diff --git a/README.md b/README.md index 08f9f52..61c5e22 100644 --- a/README.md +++ b/README.md @@ -42,6 +42,8 @@ Here's a list of utilities you can find in lisflood-utilities package. * __compare__ is a package containing a set of simple Python classes that helps to compare netCDF, PCRaster and TSS files. +* __water-demand-historic__ is a package for generating sectoral (domestic, livestock, industry, and thermoelectric) water demand maps with monthly to yearly temporal steps for a range of past years, at a user-defined spatial resolution and geographical extent. These maps are required by the LISFLOOD OS [water use module](https://ec-jrc.github.io/lisflood-model/2_18_stdLISFLOOD_water-use). + + * __waterregions__ is a package containing two scripts that allow to create and verify a water regions map, respectively. The package contains convenient classes for reading/writing: @@ -350,6 +352,26 @@ options: Output thresholds file ``` +## water-demand-historic + +This utility creates water demand maps at the desired resolution and for the desired geographical areas. The maps indicate, for each pixel, the time-varying water demand to be supplied for domestic, livestock, industrial, and thermoelectric water consumption. The temporal discretization is monthly for domestic and energy demand, and yearly for industrial and livestock demand. The maps of sectoral water demand are required by the LISFLOOD OS [water use module](https://ec-jrc.github.io/lisflood-model/2_18_stdLISFLOOD_water-use/). The sectoral water demand maps and the scripts of this utility can also be used for other applications, as well as for stand-alone analyses of historical water demand for anthropogenic use. + +#### Input +The creation of the sectoral water demand maps requires a template map that defines the desired geographical area and spatial resolution. The generation of the maps relies on a number of external datasets (examples are the [Global Human Settlement - Datasets - European Commission (europa.eu)](https://ghsl.jrc.ec.europa.eu/datasets.php) and the [FAO AQUASTAT Dissemination System](https://data.apps.fao.org/aquastat/?lang=en)). The locations of the template map, of the input datasets and files, and of the output folder, together with other user choices (e.g. start year and end year), are specified in a configuration file. The syntax of the configuration file is pre-defined and an example is provided to the users. The complete list of external datasets and the instructions on how to prepare (i) the external datasets, (ii) the template map, (iii) the input folder, (iv) the output folder, and (v) the configuration file are explained in detail [here](src/lisfloodutilities/water-demand-historic/README.md). + +#### Output +Four sectoral water demand maps in netCDF-4 format. The geographical extent and the spatial resolution are defined by the template map (a user-defined input file). Each netCDF-4 file contains 12 monthly values for each year of the time span selected by the user. Sectoral water demand data with a lower (yearly) temporal resolution are repeated 12 times.
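+
+As an illustration of this output structure, the snippet below opens one of the produced files and checks its grid and time dimensions. It is only a sketch: the file name `dom.nc` and the way the data variable is looked up are placeholders for the example, not names prescribed by the utility.
+
+```python
+# Minimal sketch (hypothetical file and variable names): inspect one sectoral demand file.
+from netCDF4 import Dataset
+
+with Dataset("output_folder/dom.nc") as nc:                # e.g. the domestic demand file
+    lat = nc.variables["lat"][:]
+    lon = nc.variables["lon"][:]
+    data_vars = [v for v in nc.variables if v not in ("lat", "lon", "time")]
+    demand = nc.variables[data_vars[0]]
+    print("grid size:", len(lat), "x", len(lon))
+    print("resolution (degrees):", abs(float(lon[1] - lon[0])))
+    print("time steps:", demand.shape[0])                  # 12 months x number of years
+```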
+ +#### Usage +The methodology includes five main steps. The instructions on how to retrieve the scripts, create the environment including all the required packages, and use the utility are provided [here](src/lisfloodutilities/water-demand-historic/README.md). + +#### Important notes on documentation and data availability +The complete list of external datasets, the instructions on how to retrieve the external datasets, the methodology, and the usage of the scripts are explained in detail [here](src/lisfloodutilities/water-demand-historic/README.md). The README file provides detailed technical information about the input datasets and the usage of this utility. The methodology is explained in the manuscript: Choulga, M., Moschini, F., Mazzetti, C., Grimaldi, S., Disperati, J., Beck, H., Salamon, P., and Prudhomme, C.: Technical note: Surface fields for global environmental modelling, EGUsphere, 2023 ([preprint](https://doi.org/10.5194/egusphere-2023-1306)). + +The global sectoral water demand maps at 3 arcmin (or 0.05 degrees) resolution for 1979-2019, produced using the scripts of this utility, can be downloaded from the [Joint Research Centre Data Catalogue - LISFLOOD static and parameter maps for GloFAS - European Commission (europa.eu)](https://data.jrc.ec.europa.eu/dataset/68050d73-9c06-499c-a441-dc5053cb0c86). + + + ## waterregions The modelling of water abstraction for domestic, industrial, energetic, agricultural and livestock use can require a map of the water regions. The concept of water regions and information for their definition are explained [here](https://ec-jrc.github.io/lisflood-model/2_18_stdLISFLOOD_water-use/). diff --git a/src/lisfloodutilities/VERSION b/src/lisfloodutilities/VERSION index 330a1e9..d23da87 100644 --- a/src/lisfloodutilities/VERSION +++ b/src/lisfloodutilities/VERSION @@ -1 +1 @@ -0.12.21 +0.12.22 diff --git a/src/lisfloodutilities/water-demand-historic/README.md b/src/lisfloodutilities/water-demand-historic/README.md new file mode 100644 index 0000000..76c67df --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/README.md @@ -0,0 +1,98 @@ +# Overview + +This utility creates sectoral water demand maps at the desired spatial resolution and for the desired geographical areas. The maps indicate, for each pixel, the time-varying water demand to be supplied for domestic, livestock, industrial, and energetic (cooling) water consumption. The temporal discretization of the information is monthly for domestic and energy demand, and yearly for industrial and livestock demand. Users can select the historical time span of the output datasets (start year and end year). The maps of sectoral water demand are required by the LISFLOOD OS [water use module](https://ec-jrc.github.io/lisflood-model/2_18_stdLISFLOOD_water-use). The scripts and the outputs of this utility can also be used for other applications, as well as for stand-alone analyses of historical water demand for anthropogenic use. Due to the large uncertainty of the global datasets, no distinction is made here between water demand and withdrawal. + +This README file provides detailed technical information about the input datasets and the usage of this utility. The methodology is explained in the manuscript: Choulga, M., Moschini, F., Mazzetti, C., Grimaldi, S., Disperati, J., Beck, H., Salamon, P., and Prudhomme, C.: Technical note: Surface fields for global environmental modelling, EGUsphere, 2023 ([preprint](https://doi.org/10.5194/egusphere-2023-1306)).
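+
+To make the temporal structure of the outputs concrete, the sketch below shows how sectors with yearly estimates (industrial and livestock demand) are mapped onto the monthly time axis of the output files: each yearly map is simply repeated for the 12 months of that year. The array names and shapes are illustrative only and not part of the utility's interface.
+
+```python
+# Minimal sketch (illustrative names and shapes): expand yearly maps to a monthly time axis.
+import numpy as np
+
+year_start, year_end = 1979, 2019
+n_years = year_end - year_start + 1
+
+yearly_maps = np.random.rand(n_years, 180, 360)        # one map per year (e.g. livestock demand)
+monthly_maps = np.repeat(yearly_maps, 12, axis=0)      # each yearly map repeated 12 times
+assert monthly_maps.shape == (12 * n_years, 180, 360)
+```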
+ +The global sectoral water demand maps at 3 arcmin (or 0.05 degrees) resolution for 1979-2019, produced using the scripts of this utility, can be downloaded from the [Joint Research Centre Data Catalogue - LISFLOOD static and parameter maps for GloFAS - European Commission (europa.eu)](https://data.jrc.ec.europa.eu/dataset/68050d73-9c06-499c-a441-dc5053cb0c86). + +# Data + +The utility consists of five scripts, which require several external datasets and files. The full list of datasets is provided below. For each dataset, this documentation provides the link and the instructions for the version that was available at the time the scripts were created. The correct functioning of the scripts is guaranteed for those versions. Updated versions are regularly published: one relevant example is [Global Human Settlement - Datasets - European Commission (europa.eu)](https://ghsl.jrc.ec.europa.eu/datasets.php) (see Global Human Settlement Layer (GHSL)). Although the utility is expected to keep working with updated datasets, users are invited to report problems or suggestions via GitHub issues. +1. [FAO AQUASTAT](http://fao.org/aquastat/statistics/query/index.html?lang=en) sectoral water withdrawal estimates. Select "All Countries" and the following seven fields in the variable groups "Geography and population" and "Water use": "Gross Domestic Product (GDP)", "Industry, value added to GDP", "Agricultural water withdrawal", "Industrial water withdrawal", "Municipal water withdrawal", "Total water withdrawal", and "Irrigation water withdrawal". Select All Years, then click on the "Show Data" button. On the new page, click on the "Download" button that will appear on the top right corner to download the xlsx file containing all data. Convert the Excel file to CSV and save it as `aquastat_clean.csv` in the `aquastat_folder` specified in the configuration file. +1. [Global Human Settlement Layer (GHSL)](https://ghsl.jrc.ec.europa.eu/ghs_pop2019.php) POP R2019A residential population estimates for target years 1975, 1990, 2000, and 2015. You will find the data as an archive [here](https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/GHSL/GHS_POP_MT_GLOBE_R2019A/). For each year, download the 0.0025° (9 arcsec, folders ending in "_9ss", e.g. "GHS_POP_E2015_GLOBE_R2019A_4326_9ss") maps in WGS84 projection and resample them to 0.01° using `gdalwarp -t_srs EPSG:4326 -tr 0.01 0.01 -r average -of GTiff GHS_POP_E<year>_GLOBE_R2019A_4326_9ss_V1_0.tif GHS_POP_E<year>_GLOBE_R2019A_4326_9ss_V1_0_reprojected.tif`, where `<year>` is 1975, 1990, 2000, or 2015. We resample the maps using `average` instead of `sum` as the latter is broken. Put the resampled files in `ghsl_folder`. +1. [Global Change Analysis Model (GCAM)](https://github.com/JGCRI/gcam-core/releases) regional water withdrawal and electricity consumption estimates. Download the Windows release package version 5.4, execute `run-gcam.bat`, and wait for the model to finish. Execute `run-model-interface.bat`, click "File" > "Open" > "DB Open", select `output/database_basexdb`, and select all scenarios and all regions. Select `water demand`, select `water withdrawals by sector`, and click "Run query". Select `energy transformation`, select `electricity`, select `elec consumption by demand sector`, and click "Run query". For each tab, select all data with ctrl-a and click "Edit" > "Copy".
Open a blank [Google Sheets](http://sheets.google.com/) spreadsheet, press ctrl-v, manually add the headers, click "File" > "Download" > "Comma-separated values", save as `water_demand.csv` and `elec_consumption.csv`, respectively, and put both files in `gcam_folder`. +1. [Gridded Livestock of the World (GLW 3)](https://doi.org/10.1038/sdata.2018.227) species distribution dataset. Download the eight zip files (each representing a different species) and extract them to `glw_folder`. +1. [Huang et al. (2018)](https://doi.org/10.5281/zenodo.1209296) global gridded water withdrawal estimates. These estimates are not incorporated in our dataset, but are only used for the sake of comparison. Download all 7z files and extract them to `huang_folder`. +1. [Multi-Source Weather (MSWX)](http://www.gloh2o.org/mswx) daily and monthly mean air temperature. Download using rclone as explained in the FAQ on the web page. Put the daily and monthly netCDFs in `/Past/Temp/Daily` and `/Past/Temp/Monthly`, respectively, where `mswx_folder` is specified in the configuration file. +1. [US Census Bureau](https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html) state borders shape file. Download `cb_2018_us_state_500k.zip`, unzip it, open the shape file in QGIS, open the Field Calculator, enter `STATEFP` in "Expression", enter `STATEFP_` in "Output field name", and click OK. Rasterize to 0.01° using `gdal_rasterize -l cb_2018_us_state_500k -a STATEFP_ -tr 0.01 0.01 -a_nodata 0.0 -te -180.0 -90.0 180.0 90.0 -ot Float32 -of GTiff cb_2018_us_state_500k.shp cb_2018_us_state_500k_rasterized.tif`. Put `cb_2018_us_state_500k_rasterized.tif` in `us_states_folder`. +1. [USGS NWIS](https://waterdata.usgs.gov/nv/nwis/wu) water withdrawal estimates for 1985--present. For each state in the "Geographic Area" drop-down menu, select "State Data", "ALL Years", "State Total", and "ALL Categories" and click "Submit". Then select "Tab-separated data" and click "Submit". Do this for each state and put files in `usgs_water_use_folder`. +1. USGS water withdrawal data files for [1985](https://water.usgs.gov/watuse/data/1985/index.html) and [1990](https://water.usgs.gov/watuse/data/1990/index.html) (to supplement the NWIS data). Download "Data file of state-level data" for each year (`us85st.txt` and `us90st.txt`) and put the files in `usgs_water_use_folder`. +1. [Vassolo and Döll (2005)](https://doi.org/10.1029/2004WR003360) industrial and thermoelectric water withdrawal maps (included in this repository with permission from Petra Döll). The industrial withdrawal shape file is rasterized using `gdal_rasterize -l "industry_paper_vassolo&doell" -a MANUF_WWD -tr 0.5 0.5 -a_nodata 0.0 -te -180.0 -90.0 180.0 90.0 -ot Float32 -of GTiff "industry_paper_vassolo&doell.shp" manuf_wwd.tif`. Thermoelectric shape file rasterized using same command but with `WWD_PS` and `wwd_ps.tif`. Put the files in `vassolo_doll_folder`. +1. [Thematic Mapping](https://thematicmapping.org/downloads/world_borders.php) country borders shape file. Download `TM_WORLD_BORDERS-0.3.zip`, unzip, and rasterize to 0.01° using `gdal_rasterize -l TM_WORLD_BORDERS-0.3 -a UN -tr 0.01 0.01 -a_nodata 0.0 -te -180.0 -90.0 180.0 90.0 -ot Float32 -of GTiff TM_WORLD_BORDERS-0.3.shp TM_WORLD_BORDERS_UN_rasterized.tif`. Put the result in `world_borders_folder`. +1. [World Bank](https://data.worldbank.org/) manufacturing value added and gross domestic product data. 
Search for "Manufacturing, value added (constant 2010 US$)" and "GDP (constant 2010 US$)", download as CSV, and put in `world_bank_folder` (Remove Metadata csv files, if any). + + +The locations of the datasets and files are specified in the configuration file (see the `config.cfg` example). The script also requires a template map which defines the output resolution and area. The template map should be in netCDF-4 format and contain `lat` and `lon` variables and a data variable (any name). The location of the template map is specified using `templatemap_path` in the configuration file. The data are produced for the period spanning `year_start` to `year_end` and saved in netCDF-4 format to `output_folder` (all specified in the configuration file). + +# Methods + +The module consists of five scripts: +1. `step1_population_density.py`: + 1. Resamples the native resolution of the GHSL population grids to the template map resolution. + 1. Computes population grids (in density per km2) for each year using linear interpolation and nearest-neighbor extrapolation. +1. `step2_domestic_demand.py`: + 1. Produces a map of the R parameter (measures the relative withdrawal difference between the warmest and coldest months). + 1. Computes annual time series of population for each country. + 1. Loads the country-scale AQUASTAT municipal water withdrawal estimate and produces annual time series using linear interpolation and population-based forward and backward extrapolation. + 1. Loads the state-level USGS withdrawal data and computes annual values using linear interpolation and nearest-neighbour extrapolation. + 1. For each year, computes country- and state-scale per capita water demand and produces a global map. + 1. Annual per capita water demand for countries without any AQUASTAT information is estimated using nearest-neighbour interpolation. + 1. Spatially disaggregates the country- and state-scale annual per capita water demand estimates using the population estimates to obtain global annual withdrawal grids. + 1. Temporally downscales the annual withdrawal grids using monthly MSWX air temperature grids and the map of the R parameter. +1. `step3a_industrial_demand.py`: + 1. Computes country-scale values of the Vassolo and Döll (2005) industrial and thermoelectric withdrawal grids. + 1. Processes the country-scale AQUASTAT industrial withdrawal data. + 1. Processes the country-scale World Bank manufacturing value added (MVA) data. + 1. Processes the GCAM regional industry and thermoelectric withdrawals and spatially downscales them to the country scale based on population. + 1. Processes the GCAM regional electricity consumptions and assigns values to each country. + 1. Computes annual country-scale manufacturing and thermoelectric withdrawal time series as follows: + 1. If AQUASTAT and GCAM data are available, temporally gap-fills the AQUASTAT industrial withdrawals using linear interpolation and MVA- or population-based extrapolation, temporally gap-fills the GCAM manufacturing and thermoelectric withdrawals using linear interpolation and nearest-neighbor extrapolation, rescales the GCAM manufacturing and thermoelectric withdrawals to match the Vassolo and Döll (2005) estimates, and temporally disaggregates the AQUASTAT industrial withdrawals into manufacturing and thermoelectric using GCAM. + 1. If only GCAM data are available, simply use the rescaled GCAM manufacturing and thermoelectric withdrawals. + 1. 
If neither AQUASTAT nor GCAM data are available, set the manufacturing and thermoelectric withdrawals to zero. + 1. Computes monthly heating degree days (HDD) and cooling degree days (CDD) grids using daily MSWX air temperature. +1. `step3b_industrial_demand.py`: + 1. Loads the annual country-scale manufacturing and thermoelectric withdrawal time series, the USGS manufacturing and thermoelectric withdrawal estimates, and the GCAM electricity consumptions from the preceding scripts. + 1. Gap-fills the annual USGS manufacturing and thermoelectric withdrawal time series using linear interpolation and nearest-neighbor extrapolation. + 1. Spatially downscales the country- and state-scale manufacturing and thermoelectric withdrawals using population grids to obtain annual withdrawal grids. + 1. Temporally downscales the annual thermoelectric withdrawal grids based on the GCAM electricity consumptions and the HDD and CDD grids. +1. `step4_livestock_demand.py`: + 1. Computes country-scale livestock withdrawals by taking the difference between the AQUASTAT agriculture and irrigation withdrawals. + 1. Processes the raw GLW grids for the different species and estimates the total livestock mass. + 1. Computes the total livestock mass for each country. + 1. Spatially downscales the GCAM regional livestock withdrawals to the country scale using the GLW-based total livestock mass estimates. + 1. Loops through countries and gap-fills the annual GCAM time series using linear interpolation and nearest-neighbor extrapolation, and rescales the time series to match the AQUASTAT estimate (if available). + 1. Gap-fills the annual USGS manufacturing and thermoelectric withdrawal time series using linear interpolation and nearest-neighbor extrapolation. + 1. For each year, spatially downscales the country- and state-scale livestock withdrawals using the GLW total livestock mass grid. + +# System requirements + +The scripts can be run on a normal desktop PC; support is provided only for Linux. Physical memory requirements depend on the desired geographical extent and resolution of the output dataset. For instance, the generation of the sectoral water demand dataset for the global domain at 3 arcmin (or 0.05 degrees) resolution required 32 GB of memory. + +# Instructions + +Clone the repository and change to the utility directory: +``` +git clone https://github.com/ec-jrc/lisflood-utilities.git +cd lisflood-utilities/src/lisfloodutilities/water-demand-historic +``` +Produce a configuration file with the correct paths and folders (based on the [included example](config_global_0p1deg.cfg)). + +Create and activate a Conda environment and run the scripts, passing the configuration file as the command-line argument: +``` +conda create --name <env> --file requirements.txt +conda activate <env> +python step1_population_density.py <config file> +python step2_domestic_demand.py <config file> +...
+``` +If the environment creation step fails, the following might work: +``` +conda create -n -c conda-forge geopandas h5py pandas numpy netcdf4 matplotlib rasterio scikit-image openpyxl geopy +``` \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/ancillary_data/un_country_codes.csv b/src/lisfloodutilities/water-demand-historic/ancillary_data/un_country_codes.csv new file mode 100644 index 0000000..f9b67f9 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/ancillary_data/un_country_codes.csv @@ -0,0 +1,250 @@ +name,alpha-2,alpha-3,country-code,iso_3166-2,region,sub-region,intermediate-region,region-code,sub-region-code,intermediate-region-code +Afghanistan,AF,AFG,004,ISO 3166-2:AF,Asia,Southern Asia,"",142,034,"" +Åland Islands,AX,ALA,248,ISO 3166-2:AX,Europe,Northern Europe,"",150,154,"" +Albania,AL,ALB,008,ISO 3166-2:AL,Europe,Southern Europe,"",150,039,"" +Algeria,DZ,DZA,012,ISO 3166-2:DZ,Africa,Northern Africa,"",002,015,"" +American Samoa,AS,ASM,016,ISO 3166-2:AS,Oceania,Polynesia,"",009,061,"" +Andorra,AD,AND,020,ISO 3166-2:AD,Europe,Southern Europe,"",150,039,"" +Angola,AO,AGO,024,ISO 3166-2:AO,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Anguilla,AI,AIA,660,ISO 3166-2:AI,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Antarctica,AQ,ATA,010,ISO 3166-2:AQ,"","","","","","" +Antigua and Barbuda,AG,ATG,028,ISO 3166-2:AG,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Argentina,AR,ARG,032,ISO 3166-2:AR,Americas,Latin America and the Caribbean,South America,019,419,005 +Armenia,AM,ARM,051,ISO 3166-2:AM,Asia,Western Asia,"",142,145,"" +Aruba,AW,ABW,533,ISO 3166-2:AW,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Australia,AU,AUS,036,ISO 3166-2:AU,Oceania,Australia and New Zealand,"",009,053,"" +Austria,AT,AUT,040,ISO 3166-2:AT,Europe,Western Europe,"",150,155,"" +Azerbaijan,AZ,AZE,031,ISO 3166-2:AZ,Asia,Western Asia,"",142,145,"" +Bahamas,BS,BHS,044,ISO 3166-2:BS,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Bahrain,BH,BHR,048,ISO 3166-2:BH,Asia,Western Asia,"",142,145,"" +Bangladesh,BD,BGD,050,ISO 3166-2:BD,Asia,Southern Asia,"",142,034,"" +Barbados,BB,BRB,052,ISO 3166-2:BB,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Belarus,BY,BLR,112,ISO 3166-2:BY,Europe,Eastern Europe,"",150,151,"" +Belgium,BE,BEL,056,ISO 3166-2:BE,Europe,Western Europe,"",150,155,"" +Belize,BZ,BLZ,084,ISO 3166-2:BZ,Americas,Latin America and the Caribbean,Central America,019,419,013 +Benin,BJ,BEN,204,ISO 3166-2:BJ,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Bermuda,BM,BMU,060,ISO 3166-2:BM,Americas,Northern America,"",019,021,"" +Bhutan,BT,BTN,064,ISO 3166-2:BT,Asia,Southern Asia,"",142,034,"" +Bolivia (Plurinational State of),BO,BOL,068,ISO 3166-2:BO,Americas,Latin America and the Caribbean,South America,019,419,005 +"Bonaire, Sint Eustatius and Saba",BQ,BES,535,ISO 3166-2:BQ,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Bosnia and Herzegovina,BA,BIH,070,ISO 3166-2:BA,Europe,Southern Europe,"",150,039,"" +Botswana,BW,BWA,072,ISO 3166-2:BW,Africa,Sub-Saharan Africa,Southern Africa,002,202,018 +Bouvet Island,BV,BVT,074,ISO 3166-2:BV,Americas,Latin America and the Caribbean,South America,019,419,005 +Brazil,BR,BRA,076,ISO 3166-2:BR,Americas,Latin America and the Caribbean,South America,019,419,005 +British Indian Ocean Territory,IO,IOT,086,ISO 3166-2:IO,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Brunei Darussalam,BN,BRN,096,ISO 
3166-2:BN,Asia,South-eastern Asia,"",142,035,"" +Bulgaria,BG,BGR,100,ISO 3166-2:BG,Europe,Eastern Europe,"",150,151,"" +Burkina Faso,BF,BFA,854,ISO 3166-2:BF,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Burundi,BI,BDI,108,ISO 3166-2:BI,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Cabo Verde,CV,CPV,132,ISO 3166-2:CV,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Cambodia,KH,KHM,116,ISO 3166-2:KH,Asia,South-eastern Asia,"",142,035,"" +Cameroon,CM,CMR,120,ISO 3166-2:CM,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Canada,CA,CAN,124,ISO 3166-2:CA,Americas,Northern America,"",019,021,"" +Cayman Islands,KY,CYM,136,ISO 3166-2:KY,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Central African Republic,CF,CAF,140,ISO 3166-2:CF,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Chad,TD,TCD,148,ISO 3166-2:TD,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Chile,CL,CHL,152,ISO 3166-2:CL,Americas,Latin America and the Caribbean,South America,019,419,005 +China,CN,CHN,156,ISO 3166-2:CN,Asia,Eastern Asia,"",142,030,"" +Christmas Island,CX,CXR,162,ISO 3166-2:CX,Oceania,Australia and New Zealand,"",009,053,"" +Cocos (Keeling) Islands,CC,CCK,166,ISO 3166-2:CC,Oceania,Australia and New Zealand,"",009,053,"" +Colombia,CO,COL,170,ISO 3166-2:CO,Americas,Latin America and the Caribbean,South America,019,419,005 +Comoros,KM,COM,174,ISO 3166-2:KM,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Congo,CG,COG,178,ISO 3166-2:CG,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +"Congo, Democratic Republic of the",CD,COD,180,ISO 3166-2:CD,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Cook Islands,CK,COK,184,ISO 3166-2:CK,Oceania,Polynesia,"",009,061,"" +Costa Rica,CR,CRI,188,ISO 3166-2:CR,Americas,Latin America and the Caribbean,Central America,019,419,013 +Côte d'Ivoire,CI,CIV,384,ISO 3166-2:CI,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Croatia,HR,HRV,191,ISO 3166-2:HR,Europe,Southern Europe,"",150,039,"" +Cuba,CU,CUB,192,ISO 3166-2:CU,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Curaçao,CW,CUW,531,ISO 3166-2:CW,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Cyprus,CY,CYP,196,ISO 3166-2:CY,Asia,Western Asia,"",142,145,"" +Czechia,CZ,CZE,203,ISO 3166-2:CZ,Europe,Eastern Europe,"",150,151,"" +Denmark,DK,DNK,208,ISO 3166-2:DK,Europe,Northern Europe,"",150,154,"" +Djibouti,DJ,DJI,262,ISO 3166-2:DJ,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Dominica,DM,DMA,212,ISO 3166-2:DM,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Dominican Republic,DO,DOM,214,ISO 3166-2:DO,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Ecuador,EC,ECU,218,ISO 3166-2:EC,Americas,Latin America and the Caribbean,South America,019,419,005 +Egypt,EG,EGY,818,ISO 3166-2:EG,Africa,Northern Africa,"",002,015,"" +El Salvador,SV,SLV,222,ISO 3166-2:SV,Americas,Latin America and the Caribbean,Central America,019,419,013 +Equatorial Guinea,GQ,GNQ,226,ISO 3166-2:GQ,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Eritrea,ER,ERI,232,ISO 3166-2:ER,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Estonia,EE,EST,233,ISO 3166-2:EE,Europe,Northern Europe,"",150,154,"" +Eswatini,SZ,SWZ,748,ISO 3166-2:SZ,Africa,Sub-Saharan Africa,Southern Africa,002,202,018 +Ethiopia,ET,ETH,231,ISO 3166-2:ET,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Falkland Islands (Malvinas),FK,FLK,238,ISO 3166-2:FK,Americas,Latin America and the Caribbean,South America,019,419,005 +Faroe Islands,FO,FRO,234,ISO 
3166-2:FO,Europe,Northern Europe,"",150,154,"" +Fiji,FJ,FJI,242,ISO 3166-2:FJ,Oceania,Melanesia,"",009,054,"" +Finland,FI,FIN,246,ISO 3166-2:FI,Europe,Northern Europe,"",150,154,"" +France,FR,FRA,250,ISO 3166-2:FR,Europe,Western Europe,"",150,155,"" +French Guiana,GF,GUF,254,ISO 3166-2:GF,Americas,Latin America and the Caribbean,South America,019,419,005 +French Polynesia,PF,PYF,258,ISO 3166-2:PF,Oceania,Polynesia,"",009,061,"" +French Southern Territories,TF,ATF,260,ISO 3166-2:TF,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Gabon,GA,GAB,266,ISO 3166-2:GA,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Gambia,GM,GMB,270,ISO 3166-2:GM,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Georgia,GE,GEO,268,ISO 3166-2:GE,Asia,Western Asia,"",142,145,"" +Germany,DE,DEU,276,ISO 3166-2:DE,Europe,Western Europe,"",150,155,"" +Ghana,GH,GHA,288,ISO 3166-2:GH,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Gibraltar,GI,GIB,292,ISO 3166-2:GI,Europe,Southern Europe,"",150,039,"" +Greece,GR,GRC,300,ISO 3166-2:GR,Europe,Southern Europe,"",150,039,"" +Greenland,GL,GRL,304,ISO 3166-2:GL,Americas,Northern America,"",019,021,"" +Grenada,GD,GRD,308,ISO 3166-2:GD,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Guadeloupe,GP,GLP,312,ISO 3166-2:GP,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Guam,GU,GUM,316,ISO 3166-2:GU,Oceania,Micronesia,"",009,057,"" +Guatemala,GT,GTM,320,ISO 3166-2:GT,Americas,Latin America and the Caribbean,Central America,019,419,013 +Guernsey,GG,GGY,831,ISO 3166-2:GG,Europe,Northern Europe,Channel Islands,150,154,830 +Guinea,GN,GIN,324,ISO 3166-2:GN,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Guinea-Bissau,GW,GNB,624,ISO 3166-2:GW,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Guyana,GY,GUY,328,ISO 3166-2:GY,Americas,Latin America and the Caribbean,South America,019,419,005 +Haiti,HT,HTI,332,ISO 3166-2:HT,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Heard Island and McDonald Islands,HM,HMD,334,ISO 3166-2:HM,Oceania,Australia and New Zealand,"",009,053,"" +Holy See,VA,VAT,336,ISO 3166-2:VA,Europe,Southern Europe,"",150,039,"" +Honduras,HN,HND,340,ISO 3166-2:HN,Americas,Latin America and the Caribbean,Central America,019,419,013 +Hong Kong,HK,HKG,344,ISO 3166-2:HK,Asia,Eastern Asia,"",142,030,"" +Hungary,HU,HUN,348,ISO 3166-2:HU,Europe,Eastern Europe,"",150,151,"" +Iceland,IS,ISL,352,ISO 3166-2:IS,Europe,Northern Europe,"",150,154,"" +India,IN,IND,356,ISO 3166-2:IN,Asia,Southern Asia,"",142,034,"" +Indonesia,ID,IDN,360,ISO 3166-2:ID,Asia,South-eastern Asia,"",142,035,"" +Iran (Islamic Republic of),IR,IRN,364,ISO 3166-2:IR,Asia,Southern Asia,"",142,034,"" +Iraq,IQ,IRQ,368,ISO 3166-2:IQ,Asia,Western Asia,"",142,145,"" +Ireland,IE,IRL,372,ISO 3166-2:IE,Europe,Northern Europe,"",150,154,"" +Isle of Man,IM,IMN,833,ISO 3166-2:IM,Europe,Northern Europe,"",150,154,"" +Israel,IL,ISR,376,ISO 3166-2:IL,Asia,Western Asia,"",142,145,"" +Italy,IT,ITA,380,ISO 3166-2:IT,Europe,Southern Europe,"",150,039,"" +Jamaica,JM,JAM,388,ISO 3166-2:JM,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Japan,JP,JPN,392,ISO 3166-2:JP,Asia,Eastern Asia,"",142,030,"" +Jersey,JE,JEY,832,ISO 3166-2:JE,Europe,Northern Europe,Channel Islands,150,154,830 +Jordan,JO,JOR,400,ISO 3166-2:JO,Asia,Western Asia,"",142,145,"" +Kazakhstan,KZ,KAZ,398,ISO 3166-2:KZ,Asia,Central Asia,"",142,143,"" +Kenya,KE,KEN,404,ISO 3166-2:KE,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Kiribati,KI,KIR,296,ISO 
3166-2:KI,Oceania,Micronesia,"",009,057,"" +Korea (Democratic People's Republic of),KP,PRK,408,ISO 3166-2:KP,Asia,Eastern Asia,"",142,030,"" +"Korea, Republic of",KR,KOR,410,ISO 3166-2:KR,Asia,Eastern Asia,"",142,030,"" +Kuwait,KW,KWT,414,ISO 3166-2:KW,Asia,Western Asia,"",142,145,"" +Kyrgyzstan,KG,KGZ,417,ISO 3166-2:KG,Asia,Central Asia,"",142,143,"" +Lao People's Democratic Republic,LA,LAO,418,ISO 3166-2:LA,Asia,South-eastern Asia,"",142,035,"" +Latvia,LV,LVA,428,ISO 3166-2:LV,Europe,Northern Europe,"",150,154,"" +Lebanon,LB,LBN,422,ISO 3166-2:LB,Asia,Western Asia,"",142,145,"" +Lesotho,LS,LSO,426,ISO 3166-2:LS,Africa,Sub-Saharan Africa,Southern Africa,002,202,018 +Liberia,LR,LBR,430,ISO 3166-2:LR,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Libya,LY,LBY,434,ISO 3166-2:LY,Africa,Northern Africa,"",002,015,"" +Liechtenstein,LI,LIE,438,ISO 3166-2:LI,Europe,Western Europe,"",150,155,"" +Lithuania,LT,LTU,440,ISO 3166-2:LT,Europe,Northern Europe,"",150,154,"" +Luxembourg,LU,LUX,442,ISO 3166-2:LU,Europe,Western Europe,"",150,155,"" +Macao,MO,MAC,446,ISO 3166-2:MO,Asia,Eastern Asia,"",142,030,"" +Madagascar,MG,MDG,450,ISO 3166-2:MG,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Malawi,MW,MWI,454,ISO 3166-2:MW,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Malaysia,MY,MYS,458,ISO 3166-2:MY,Asia,South-eastern Asia,"",142,035,"" +Maldives,MV,MDV,462,ISO 3166-2:MV,Asia,Southern Asia,"",142,034,"" +Mali,ML,MLI,466,ISO 3166-2:ML,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Malta,MT,MLT,470,ISO 3166-2:MT,Europe,Southern Europe,"",150,039,"" +Marshall Islands,MH,MHL,584,ISO 3166-2:MH,Oceania,Micronesia,"",009,057,"" +Martinique,MQ,MTQ,474,ISO 3166-2:MQ,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Mauritania,MR,MRT,478,ISO 3166-2:MR,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Mauritius,MU,MUS,480,ISO 3166-2:MU,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Mayotte,YT,MYT,175,ISO 3166-2:YT,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Mexico,MX,MEX,484,ISO 3166-2:MX,Americas,Latin America and the Caribbean,Central America,019,419,013 +Micronesia (Federated States of),FM,FSM,583,ISO 3166-2:FM,Oceania,Micronesia,"",009,057,"" +"Moldova, Republic of",MD,MDA,498,ISO 3166-2:MD,Europe,Eastern Europe,"",150,151,"" +Monaco,MC,MCO,492,ISO 3166-2:MC,Europe,Western Europe,"",150,155,"" +Mongolia,MN,MNG,496,ISO 3166-2:MN,Asia,Eastern Asia,"",142,030,"" +Montenegro,ME,MNE,499,ISO 3166-2:ME,Europe,Southern Europe,"",150,039,"" +Montserrat,MS,MSR,500,ISO 3166-2:MS,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Morocco,MA,MAR,504,ISO 3166-2:MA,Africa,Northern Africa,"",002,015,"" +Mozambique,MZ,MOZ,508,ISO 3166-2:MZ,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Myanmar,MM,MMR,104,ISO 3166-2:MM,Asia,South-eastern Asia,"",142,035,"" +Namibia,NA,NAM,516,ISO 3166-2:NA,Africa,Sub-Saharan Africa,Southern Africa,002,202,018 +Nauru,NR,NRU,520,ISO 3166-2:NR,Oceania,Micronesia,"",009,057,"" +Nepal,NP,NPL,524,ISO 3166-2:NP,Asia,Southern Asia,"",142,034,"" +Netherlands,NL,NLD,528,ISO 3166-2:NL,Europe,Western Europe,"",150,155,"" +New Caledonia,NC,NCL,540,ISO 3166-2:NC,Oceania,Melanesia,"",009,054,"" +New Zealand,NZ,NZL,554,ISO 3166-2:NZ,Oceania,Australia and New Zealand,"",009,053,"" +Nicaragua,NI,NIC,558,ISO 3166-2:NI,Americas,Latin America and the Caribbean,Central America,019,419,013 +Niger,NE,NER,562,ISO 3166-2:NE,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Nigeria,NG,NGA,566,ISO 3166-2:NG,Africa,Sub-Saharan 
Africa,Western Africa,002,202,011 +Niue,NU,NIU,570,ISO 3166-2:NU,Oceania,Polynesia,"",009,061,"" +Norfolk Island,NF,NFK,574,ISO 3166-2:NF,Oceania,Australia and New Zealand,"",009,053,"" +North Macedonia,MK,MKD,807,ISO 3166-2:MK,Europe,Southern Europe,"",150,039,"" +Northern Mariana Islands,MP,MNP,580,ISO 3166-2:MP,Oceania,Micronesia,"",009,057,"" +Norway,NO,NOR,578,ISO 3166-2:NO,Europe,Northern Europe,"",150,154,"" +Oman,OM,OMN,512,ISO 3166-2:OM,Asia,Western Asia,"",142,145,"" +Pakistan,PK,PAK,586,ISO 3166-2:PK,Asia,Southern Asia,"",142,034,"" +Palau,PW,PLW,585,ISO 3166-2:PW,Oceania,Micronesia,"",009,057,"" +"Palestine, State of",PS,PSE,275,ISO 3166-2:PS,Asia,Western Asia,"",142,145,"" +Panama,PA,PAN,591,ISO 3166-2:PA,Americas,Latin America and the Caribbean,Central America,019,419,013 +Papua New Guinea,PG,PNG,598,ISO 3166-2:PG,Oceania,Melanesia,"",009,054,"" +Paraguay,PY,PRY,600,ISO 3166-2:PY,Americas,Latin America and the Caribbean,South America,019,419,005 +Peru,PE,PER,604,ISO 3166-2:PE,Americas,Latin America and the Caribbean,South America,019,419,005 +Philippines,PH,PHL,608,ISO 3166-2:PH,Asia,South-eastern Asia,"",142,035,"" +Pitcairn,PN,PCN,612,ISO 3166-2:PN,Oceania,Polynesia,"",009,061,"" +Poland,PL,POL,616,ISO 3166-2:PL,Europe,Eastern Europe,"",150,151,"" +Portugal,PT,PRT,620,ISO 3166-2:PT,Europe,Southern Europe,"",150,039,"" +Puerto Rico,PR,PRI,630,ISO 3166-2:PR,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Qatar,QA,QAT,634,ISO 3166-2:QA,Asia,Western Asia,"",142,145,"" +Réunion,RE,REU,638,ISO 3166-2:RE,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Romania,RO,ROU,642,ISO 3166-2:RO,Europe,Eastern Europe,"",150,151,"" +Russian Federation,RU,RUS,643,ISO 3166-2:RU,Europe,Eastern Europe,"",150,151,"" +Rwanda,RW,RWA,646,ISO 3166-2:RW,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Saint Barthélemy,BL,BLM,652,ISO 3166-2:BL,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +"Saint Helena, Ascension and Tristan da Cunha",SH,SHN,654,ISO 3166-2:SH,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Saint Kitts and Nevis,KN,KNA,659,ISO 3166-2:KN,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Saint Lucia,LC,LCA,662,ISO 3166-2:LC,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Saint Martin (French part),MF,MAF,663,ISO 3166-2:MF,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Saint Pierre and Miquelon,PM,SPM,666,ISO 3166-2:PM,Americas,Northern America,"",019,021,"" +Saint Vincent and the Grenadines,VC,VCT,670,ISO 3166-2:VC,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Samoa,WS,WSM,882,ISO 3166-2:WS,Oceania,Polynesia,"",009,061,"" +San Marino,SM,SMR,674,ISO 3166-2:SM,Europe,Southern Europe,"",150,039,"" +Sao Tome and Principe,ST,STP,678,ISO 3166-2:ST,Africa,Sub-Saharan Africa,Middle Africa,002,202,017 +Saudi Arabia,SA,SAU,682,ISO 3166-2:SA,Asia,Western Asia,"",142,145,"" +Senegal,SN,SEN,686,ISO 3166-2:SN,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Serbia,RS,SRB,688,ISO 3166-2:RS,Europe,Southern Europe,"",150,039,"" +Seychelles,SC,SYC,690,ISO 3166-2:SC,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Sierra Leone,SL,SLE,694,ISO 3166-2:SL,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Singapore,SG,SGP,702,ISO 3166-2:SG,Asia,South-eastern Asia,"",142,035,"" +Sint Maarten (Dutch part),SX,SXM,534,ISO 3166-2:SX,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Slovakia,SK,SVK,703,ISO 3166-2:SK,Europe,Eastern Europe,"",150,151,"" +Slovenia,SI,SVN,705,ISO 
3166-2:SI,Europe,Southern Europe,"",150,039,"" +Solomon Islands,SB,SLB,090,ISO 3166-2:SB,Oceania,Melanesia,"",009,054,"" +Somalia,SO,SOM,706,ISO 3166-2:SO,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +South Africa,ZA,ZAF,710,ISO 3166-2:ZA,Africa,Sub-Saharan Africa,Southern Africa,002,202,018 +South Georgia and the South Sandwich Islands,GS,SGS,239,ISO 3166-2:GS,Americas,Latin America and the Caribbean,South America,019,419,005 +South Sudan,SS,SSD,728,ISO 3166-2:SS,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Spain,ES,ESP,724,ISO 3166-2:ES,Europe,Southern Europe,"",150,039,"" +Sri Lanka,LK,LKA,144,ISO 3166-2:LK,Asia,Southern Asia,"",142,034,"" +Sudan,SD,SDN,729,ISO 3166-2:SD,Africa,Northern Africa,"",002,015,"" +Suriname,SR,SUR,740,ISO 3166-2:SR,Americas,Latin America and the Caribbean,South America,019,419,005 +Svalbard and Jan Mayen,SJ,SJM,744,ISO 3166-2:SJ,Europe,Northern Europe,"",150,154,"" +Sweden,SE,SWE,752,ISO 3166-2:SE,Europe,Northern Europe,"",150,154,"" +Switzerland,CH,CHE,756,ISO 3166-2:CH,Europe,Western Europe,"",150,155,"" +Syrian Arab Republic,SY,SYR,760,ISO 3166-2:SY,Asia,Western Asia,"",142,145,"" +"Taiwan, Province of China",TW,TWN,158,ISO 3166-2:TW,Asia,Eastern Asia,"",142,030,"" +Tajikistan,TJ,TJK,762,ISO 3166-2:TJ,Asia,Central Asia,"",142,143,"" +"Tanzania, United Republic of",TZ,TZA,834,ISO 3166-2:TZ,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Thailand,TH,THA,764,ISO 3166-2:TH,Asia,South-eastern Asia,"",142,035,"" +Timor-Leste,TL,TLS,626,ISO 3166-2:TL,Asia,South-eastern Asia,"",142,035,"" +Togo,TG,TGO,768,ISO 3166-2:TG,Africa,Sub-Saharan Africa,Western Africa,002,202,011 +Tokelau,TK,TKL,772,ISO 3166-2:TK,Oceania,Polynesia,"",009,061,"" +Tonga,TO,TON,776,ISO 3166-2:TO,Oceania,Polynesia,"",009,061,"" +Trinidad and Tobago,TT,TTO,780,ISO 3166-2:TT,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Tunisia,TN,TUN,788,ISO 3166-2:TN,Africa,Northern Africa,"",002,015,"" +Turkey,TR,TUR,792,ISO 3166-2:TR,Asia,Western Asia,"",142,145,"" +Turkmenistan,TM,TKM,795,ISO 3166-2:TM,Asia,Central Asia,"",142,143,"" +Turks and Caicos Islands,TC,TCA,796,ISO 3166-2:TC,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Tuvalu,TV,TUV,798,ISO 3166-2:TV,Oceania,Polynesia,"",009,061,"" +Uganda,UG,UGA,800,ISO 3166-2:UG,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Ukraine,UA,UKR,804,ISO 3166-2:UA,Europe,Eastern Europe,"",150,151,"" +United Arab Emirates,AE,ARE,784,ISO 3166-2:AE,Asia,Western Asia,"",142,145,"" +United Kingdom of Great Britain and Northern Ireland,GB,GBR,826,ISO 3166-2:GB,Europe,Northern Europe,"",150,154,"" +United States of America,US,USA,840,ISO 3166-2:US,Americas,Northern America,"",019,021,"" +United States Minor Outlying Islands,UM,UMI,581,ISO 3166-2:UM,Oceania,Micronesia,"",009,057,"" +Uruguay,UY,URY,858,ISO 3166-2:UY,Americas,Latin America and the Caribbean,South America,019,419,005 +Uzbekistan,UZ,UZB,860,ISO 3166-2:UZ,Asia,Central Asia,"",142,143,"" +Vanuatu,VU,VUT,548,ISO 3166-2:VU,Oceania,Melanesia,"",009,054,"" +Venezuela (Bolivarian Republic of),VE,VEN,862,ISO 3166-2:VE,Americas,Latin America and the Caribbean,South America,019,419,005 +Viet Nam,VN,VNM,704,ISO 3166-2:VN,Asia,South-eastern Asia,"",142,035,"" +Virgin Islands (British),VG,VGB,092,ISO 3166-2:VG,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Virgin Islands (U.S.),VI,VIR,850,ISO 3166-2:VI,Americas,Latin America and the Caribbean,Caribbean,019,419,029 +Wallis and Futuna,WF,WLF,876,ISO 3166-2:WF,Oceania,Polynesia,"",009,061,"" 
+Western Sahara,EH,ESH,732,ISO 3166-2:EH,Africa,Northern Africa,"",002,015,"" +Yemen,YE,YEM,887,ISO 3166-2:YE,Asia,Western Asia,"",142,145,"" +Zambia,ZM,ZMB,894,ISO 3166-2:ZM,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 +Zimbabwe,ZW,ZWE,716,ISO 3166-2:ZW,Africa,Sub-Saharan Africa,Eastern Africa,002,202,014 diff --git a/src/lisfloodutilities/water-demand-historic/ancillary_data/us-state-ansi-fips.csv b/src/lisfloodutilities/water-demand-historic/ancillary_data/us-state-ansi-fips.csv new file mode 100644 index 0000000..1785550 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/ancillary_data/us-state-ansi-fips.csv @@ -0,0 +1,52 @@ +stname,st,stusps +Alabama,01,AL +Alaska,02,AK +Arizona,04,AZ +Arkansas,05,AR +California,06,CA +Colorado,08,CO +Connecticut,09,CT +Delaware,10,DE +District of Columbia,11,DC +Florida,12,FL +Georgia,13,GA +Hawaii,15,HI +Idaho,16,ID +Illinois,17,IL +Indiana,18,IN +Iowa,19,IA +Kansas,20,KS +Kentucky,21,KY +Louisiana,22,LA +Maine,23,ME +Maryland,24,MD +Massachusetts,25,MA +Michigan,26,MI +Minnesota,27,MN +Mississippi,28,MS +Missouri,29,MO +Montana,30,MT +Nebraska,31,NE +Nevada,32,NV +New Hampshire,33,NH +New Jersey,34,NJ +New Mexico,35,NM +New York,36,NY +North Carolina,37,NC +North Dakota,38,ND +Ohio,39,OH +Oklahoma,40,OK +Oregon,41,OR +Pennsylvania,42,PA +Rhode Island,44,RI +South Carolina,45,SC +South Dakota,46,SD +Tennessee,47,TN +Texas,48,TX +Utah,49,UT +Vermont,50,VT +Virginia,51,VA +Washington,53,WA +West Virginia,54,WV +Wisconsin,55,WI +Wyoming,56,WY \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/config_Europe_EFAS_1arcmin.cfg b/src/lisfloodutilities/water-demand-historic/config_Europe_EFAS_1arcmin.cfg new file mode 100755 index 0000000..d63f590 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/config_Europe_EFAS_1arcmin.cfg @@ -0,0 +1,22 @@ +year_start = 1979 +year_end = 2021 +templatemap_path = /eos/jeodpp/data/projects/WEFE/shared/area_European_01min.nc +output_folder = /eos/jeodpp/data/projects/WEFE/shared/lisflood-water-demand-historic-Europe-1arcmin +ancillary_data_folder = ancillary_data +aquastat_folder = /eos/jeodpp/data/projects/WEFE/shared/FAO_AQUASTAT +eia_folder = /eos/jeodpp/data/projects/WEFE/shared/EIA_electricity_database +gcam_folder = /eos/jeodpp/data/projects/WEFE/shared/GCAM_V5.4_output +ghsl_folder = /eos/jeodpp/data/projects/WEFE/shared/GHS-POP +glw_folder = /eos/jeodpp/data/projects/WEFE/shared/Gridded_Livestock_of_the_World_2010_GLW_3 +huang_folder = /eos/jeodpp/data/projects/WEFE/shared/huang_2018_water_use +htap_folder = /eos/jeodpp/data/projects/WEFE/shared/OMI-HTAP +lohrmann_power_plant_db_folder = /eos/jeodpp/data/projects/WEFE/shared/lohrmann_power_plant_database +mswx_folder = /eos/jeodpp/data/projects/WEFE/shared/MSWX_V100 +pku_fuel_folder = /eos/jeodpp/data/projects/WEFE/shared/PKU-FUEL +un_icsd_folder = /eos/jeodpp/data/projects/WEFE/shared/UN_Industrial_Commodity_Stats +us_states_folder = /eos/jeodpp/data/projects/WEFE/shared/cb_2018_us_state_500k +usgs_water_use_folder = /eos/jeodpp/data/projects/WEFE/shared/USGS_Water_Use_Data +vassolo_doll_folder = industrial_water_use_data_Vassolo_Döll +world_borders_folder = /eos/jeodpp/data/projects/WEFE/shared/TM_WORLD_BORDERS +world_bank_folder = /eos/jeodpp/data/projects/WEFE/shared/World_Bank_data +wri_power_plant_db_folder = /eos/jeodpp/data/projects/WEFE/shared/WRI_power_plant_database \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/config_global_0p1deg.cfg 
b/src/lisfloodutilities/water-demand-historic/config_global_0p1deg.cfg new file mode 100644 index 0000000..b7de158 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/config_global_0p1deg.cfg @@ -0,0 +1,22 @@ +year_start = 1979 +year_end = 2021 +templatemap_path = /eos/jeodpp/data/projects/WEFE/shared/areaOrigin.nc +output_folder = /eos/jeodpp/data/projects/WEFE/shared/lisflood-water-demand-historic_global_0p1deg +ancillary_data_folder = ancillary_data +aquastat_folder = /eos/jeodpp/data/projects/WEFE/shared/FAO_AQUASTAT +eia_folder = /eos/jeodpp/data/projects/WEFE/shared/EIA_electricity_database +gcam_folder = /eos/jeodpp/data/projects/WEFE/shared/GCAM_V5.4_output +ghsl_folder = /eos/jeodpp/data/projects/WEFE/shared/GHS-POP +glw_folder = /eos/jeodpp/data/projects/WEFE/shared/Gridded_Livestock_of_the_World_2010_GLW_3 +huang_folder = /eos/jeodpp/data/projects/WEFE/shared/huang_2018_water_use +htap_folder = /eos/jeodpp/data/projects/WEFE/shared/OMI-HTAP +lohrmann_power_plant_db_folder = /eos/jeodpp/data/projects/WEFE/shared/lohrmann_power_plant_database +mswx_folder = /eos/jeodpp/data/projects/WEFE/shared/MSWX_V100 +pku_fuel_folder = /eos/jeodpp/data/projects/WEFE/shared/PKU-FUEL +un_icsd_folder = /eos/jeodpp/data/projects/WEFE/shared/UN_Industrial_Commodity_Stats +us_states_folder = /eos/jeodpp/data/projects/WEFE/shared/cb_2018_us_state_500k +usgs_water_use_folder = /eos/jeodpp/data/projects/WEFE/shared/USGS_Water_Use_Data +vassolo_doll_folder = industrial_water_use_data_Vassolo_Döll +world_borders_folder = /eos/jeodpp/data/projects/WEFE/shared/TM_WORLD_BORDERS +world_bank_folder = /eos/jeodpp/data/projects/WEFE/shared/World_Bank_data +wri_power_plant_db_folder = /eos/jeodpp/data/projects/WEFE/shared/WRI_power_plant_database \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/ArcView_file_industry_paper_vassolo&doell.doc b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/ArcView_file_industry_paper_vassolo&doell.doc new file mode 100644 index 0000000..40c138a Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/ArcView_file_industry_paper_vassolo&doell.doc differ diff --git "a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industrial_water_use_data_Vassolo_D\303\266ll.zip" "b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industrial_water_use_data_Vassolo_D\303\266ll.zip" new file mode 100644 index 0000000..50c6952 Binary files /dev/null and "b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industrial_water_use_data_Vassolo_D\303\266ll.zip" differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.dbf b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.dbf new file mode 100644 index 0000000..e8223d4 Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.dbf differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbn b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbn new file mode 100644 index 0000000..42a35af Binary files /dev/null and 
b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbn differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbx b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbx new file mode 100644 index 0000000..54cc7f7 Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.sbx differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shp b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shp new file mode 100644 index 0000000..502ab2b Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shp differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shx b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shx new file mode 100644 index 0000000..7e0c420 Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/industry_paper_vassolo&doell.shx differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/manuf_wwd.tif b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/manuf_wwd.tif new file mode 100644 index 0000000..1dbf25e Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/manuf_wwd.tif differ diff --git a/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/wwd_ps.tif b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/wwd_ps.tif new file mode 100644 index 0000000..9b16ddc Binary files /dev/null and b/src/lisfloodutilities/water-demand-historic/industrial_water_use_data_Vassolo_Doll/wwd_ps.tif differ diff --git a/src/lisfloodutilities/water-demand-historic/requirements.txt b/src/lisfloodutilities/water-demand-historic/requirements.txt new file mode 100644 index 0000000..d7818c3 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/requirements.txt @@ -0,0 +1,189 @@ +# This file may be used to create an environment using: +# $ conda create --name --file +# platform: win-64 +affine=2.3.0=py_0 +appdirs=1.4.4=pyh9f0ad1d_0 +attrs=21.2.0=pyhd8ed1ab_0 +blosc=1.21.0=h0e60522_0 +boost-cpp=1.74.0=h5b4e17d_5 +branca=0.4.2=pyhd8ed1ab_0 +brotli=1.0.9=h8ffe710_6 +brotli-bin=1.0.9=h8ffe710_6 +brotlipy=0.7.0=py39hb82d6ee_1003 +bzip2=1.0.8=h8ffe710_4 +c-blosc2=2.0.4=h09319c2_1 +ca-certificates=2021.10.8=h5b45459_0 +cached-property=1.5.2=hd8ed1ab_1 +cached_property=1.5.2=pyha770c72_1 +cairo=1.16.0=hb19e0ff_1008 +certifi=2021.10.8=py39hcbf5309_1 +cffi=1.15.0=py39h0878f49_0 +cfitsio=4.0.0=hd67004f_0 +cftime=1.5.1.1=py39h5d4886f_1 +charls=2.2.0=h39d44d4_0 +charset-normalizer=2.0.8=pyhd8ed1ab_0 +click=8.0.3=py39hcbf5309_1 +click-plugins=1.1.1=py_0 +cligj=0.7.2=pyhd8ed1ab_1 +cloudpickle=2.0.0=pyhd8ed1ab_0 +colorama=0.4.4=pyh9f0ad1d_0 +configparser=5.0.1=py_0 +cryptography=36.0.0=py39h7bc7c5c_0 +curl=7.80.0=h789b8ee_0 +cycler=0.11.0=pyhd8ed1ab_0 +cytoolz=0.11.2=py39hb82d6ee_1 +dask-core=2021.11.2=pyhd8ed1ab_0 
+et_xmlfile=1.0.1=py_1001 +expat=2.4.1=h39d44d4_0 +fiona=1.8.20=py39hea8b339_2 +folium=0.12.1.post1=pyhd8ed1ab_0 +font-ttf-dejavu-sans-mono=2.37=hab24e00_0 +font-ttf-inconsolata=3.000=h77eed37_0 +font-ttf-source-code-pro=2.038=h77eed37_0 +font-ttf-ubuntu=0.83=hab24e00_0 +fontconfig=2.13.1=h1989441_1005 +fonts-conda-ecosystem=1=0 +fonts-conda-forge=1=0 +fonttools=4.28.2=py39hb82d6ee_0 +freetype=2.10.4=h546665d_1 +freexl=1.0.6=ha8e266a_0 +fsspec=2021.11.1=pyhd8ed1ab_0 +gdal=3.3.3=py39h3f5efd6_2 +geographiclib=1.52=pyhd8ed1ab_0 +geopandas=0.10.2=pyhd8ed1ab_1 +geopandas-base=0.10.2=pyha770c72_1 +geopy=2.2.0=pyhd8ed1ab_0 +geos=3.10.0=h39d44d4_0 +geotiff=1.7.0=h350af67_3 +gettext=0.19.8.1=ha2e2712_1008 +giflib=5.2.1=h8d14728_2 +h5py=3.6.0=nompi_py39hd4deaf1_100 +hdf4=4.2.15=h0e5069d_3 +hdf5=1.12.1=nompi_h2a0e4a3_102 +icu=68.2=h0e60522_0 +idna=3.1=pyhd3deb0d_0 +imagecodecs=2021.11.20=py39he391c9c_1 +imageio=2.9.0=py_0 +intel-openmp=2021.4.0=h57928b3_3556 +jbig=2.1=h8d14728_2003 +jinja2=3.0.3=pyhd8ed1ab_0 +joblib=1.1.0=pyhd8ed1ab_0 +jpeg=9d=he774522_0 +jxrlib=1.1=hfa6e2cd_2 +kealib=1.4.14=h8995ca9_3 +kiwisolver=1.3.2=py39h2e07f2f_1 +krb5=1.19.2=h20d022d_3 +lcms2=2.12=h2a16943_0 +lerc=3.0=h0e60522_0 +libaec=1.0.6=h39d44d4_0 +libblas=3.9.0=12_win64_mkl +libbrotlicommon=1.0.9=h8ffe710_6 +libbrotlidec=1.0.9=h8ffe710_6 +libbrotlienc=1.0.9=h8ffe710_6 +libcblas=3.9.0=12_win64_mkl +libclang=11.1.0=default_h5c34c98_1 +libcurl=7.80.0=h789b8ee_0 +libdeflate=1.8=h8ffe710_0 +libffi=3.4.2=h8ffe710_5 +libgdal=3.3.3=he0d6347_2 +libglib=2.70.1=h3be07f2_0 +libiconv=1.16=he774522_0 +libkml=1.3.0=h9859afa_1014 +liblapack=3.9.0=12_win64_mkl +libnetcdf=4.8.1=nompi_h1cc8e9d_101 +libpng=1.6.37=ha81a0f5_2 +libpq=13.5=hfcc5ef8_0 +librttopo=1.1.0=h22ffb85_7 +libspatialindex=1.9.3=h39d44d4_4 +libspatialite=5.0.1=h9c61790_10 +libssh2=1.10.0=h680486a_2 +libtiff=4.3.0=hd413186_2 +libwebp-base=1.2.1=h8ffe710_0 +libxml2=2.9.12=hf5bbc77_1 +libzip=1.8.0=hfed4ece_1 +libzlib=1.2.11=h8ffe710_1013 +libzopfli=1.0.3=ha925a31_0 +locket=0.2.0=py_2 +lz4-c=1.9.3=h8ffe710_1 +m2w64-gcc-libgfortran=5.3.0=6 +m2w64-gcc-libs=5.3.0=7 +m2w64-gcc-libs-core=5.3.0=7 +m2w64-gmp=6.1.0=2 +m2w64-libwinpthread-git=5.0.0.4634.697f757=2 +mapclassify=2.4.3=pyhd8ed1ab_0 +markupsafe=2.0.1=py39hb82d6ee_1 +matplotlib=3.5.0=py39hcbf5309_0 +matplotlib-base=3.5.0=py39h581301d_0 +mkl=2021.4.0=h0e2418a_729 +msys2-conda-epoch=20160418=1 +munch=2.5.0=py_0 +munkres=1.1.4=pyh9f0ad1d_0 +netcdf4=1.5.8=nompi_py39hf113b1f_101 +networkx=2.6.3=pyhd8ed1ab_1 +numpy=1.21.4=py39h6635163_0 +olefile=0.46=pyh9f0ad1d_1 +openjpeg=2.4.0=hb211442_1 +openpyxl=3.0.9=pyhd8ed1ab_0 +openssl=1.1.1l=h8ffe710_0 +packaging=21.3=pyhd8ed1ab_0 +pandas=1.3.4=py39h2e25243_1 +partd=1.2.0=pyhd8ed1ab_0 +pcre=8.45=h0e60522_0 +pillow=8.4.0=py39h916092e_0 +pip=21.3.1=pyhd8ed1ab_0 +pixman=0.40.0=h8ffe710_0 +pooch=1.5.2=pyhd8ed1ab_0 +poppler=21.09.0=h24fffdf_3 +poppler-data=0.4.11=hd8ed1ab_0 +postgresql=13.5=h1c22c4f_0 +proj=8.1.1=h1cfcee9_2 +pycparser=2.21=pyhd8ed1ab_0 +pyopenssl=21.0.0=pyhd8ed1ab_0 +pyparsing=3.0.6=pyhd8ed1ab_0 +pyproj=3.2.1=py39h39b2389_2 +pyqt=5.12.3=py39hcbf5309_8 +pyqt-impl=5.12.3=py39h415ef7b_8 +pyqt5-sip=4.19.18=py39h415ef7b_8 +pyqtchart=5.12=py39h415ef7b_8 +pyqtwebengine=5.12.1=py39h415ef7b_8 +pysocks=1.7.1=py39hcbf5309_4 +python=3.9.7=h7840368_3_cpython +python-dateutil=2.8.2=pyhd8ed1ab_0 +python_abi=3.9=2_cp39 +pytz=2021.3=pyhd8ed1ab_0 +pywavelets=1.2.0=py39h5d4886f_1 +pyyaml=6.0=py39hb82d6ee_3 +qt=5.12.9=h5909a2a_4 +rasterio=1.2.10=py39h20dd13d_0 +requests=2.26.0=pyhd8ed1ab_1 
+rtree=0.9.7=py39h09fdee3_3 +scikit-image=0.18.3=py39h2e25243_1 +scikit-learn=1.0.1=py39he931e04_2 +scipy=1.7.3=py39hc0c34ad_0 +setuptools=59.4.0=py39hcbf5309_0 +shapely=1.8.0=py39h2358463_2 +six=1.16.0=pyh6c4a22f_0 +snappy=1.1.8=ha925a31_3 +snuggs=1.4.7=py_0 +sqlite=3.37.0=h8ffe710_0 +tbb=2021.4.0=h2d74725_1 +threadpoolctl=3.0.0=pyh8a188c0_0 +tifffile=2021.11.2=pyhd8ed1ab_0 +tiledb=2.3.4=h78dabda_0 +tk=8.6.11=h8ffe710_1 +toolz=0.11.2=pyhd8ed1ab_0 +tornado=6.1=py39hb82d6ee_2 +tzdata=2021e=he74cb21_0 +ucrt=10.0.20348.0=h57928b3_0 +urllib3=1.26.7=pyhd8ed1ab_0 +vc=14.2=hc4473a8_5 +vs2015_runtime=14.29.30037=h902a5da_5 +wheel=0.37.0=pyhd8ed1ab_1 +win_inet_pton=1.1.0=py39hcbf5309_3 +xerces-c=3.2.3=h0e60522_4 +xyzservices=2021.11.0=pyhd8ed1ab_0 +xz=5.2.5=h62dcd97_1 +yaml=0.2.5=he774522_0 +zfp=0.5.5=h0e60522_8 +zlib=1.2.11=vc14h1cdd9ab_1 +zstd=1.5.0=h6255e5f_0 diff --git a/src/lisfloodutilities/water-demand-historic/step1_population_density.py b/src/lisfloodutilities/water-demand-historic/step1_population_density.py new file mode 100644 index 0000000..05d4c8c --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/step1_population_density.py @@ -0,0 +1,220 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +""" +Copyright 2019-2023 European Union +Licensed under the EUPL, Version 1.2 or as soon they will be approved by the European Commission subsequent versions of the EUPL (the "Licence"); +You may not use this work except in compliance with the Licence. +You may obtain a copy of the Licence at: +https://joinup.ec.europa.eu/sites/default/files/inline-files/EUPL%20v1_2%20EN(1).txt +Unless required by applicable law or agreed to in writing, software distributed under the Licence is distributed on an "AS IS" basis, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the Licence for the specific language governing permissions and limitations under the Licence. + +""" +__author__ = "Hylke E. Beck" +__date__ = "January 2022" + +import os, sys, glob, time, pdb +import pandas as pd +import numpy as np +from netCDF4 import Dataset +import matplotlib.pyplot as plt +from tools import * +import rasterio + +# Load configuration file +config = load_config(sys.argv[1]) + +def main(): + print('===============================================================================') + + # Create output folder + if os.path.isdir(os.path.join(config['output_folder'],'step1_population_density'))==False: + os.makedirs(os.path.join(config['output_folder'],'step1_population_density')) + + # Load template map + dset = Dataset(config['templatemap_path']) + template_lat = dset.variables['lat'][:] + template_lon = dset.variables['lon'][:] + template_res = template_lon[1]-template_lon[0] + varname = list(dset.variables.keys())[-1] + template_np = np.array(dset.variables[varname][:]) + + # Determine map sizes + mapsize_global = (np.round(180/template_res).astype(int),np.round(360/template_res).astype(int)) + mapsize_template = template_np.shape + row_upper,col_left = latlon2rowcol(template_lat[0],template_lon[0],template_res,90,-180) + + # Compute area for each grid-cell (includes correction because lat/lon + # values in templates are often not rounded... 
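+    # (Note: 40075 km is the Earth's equatorial circumference, so 40075*template_res/360
+    # is the east-west size of a grid-cell in km at the equator; squaring it gives the
+    # area of an equatorial cell, and the cos(latitude) factor applied below shrinks the
+    # east-west extent towards the poles, yielding an approximate cell area in km2.)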
+    xi, yi = np.meshgrid(np.arange(-180+template_res/2,180+template_res/2,template_res), np.arange(90-template_res/2,-90-template_res/2,-template_res))
+    if yi.shape[0]>np.round(180/template_res):
+        yi, xi = yi[:-1,:], xi[:-1,:]
+    if yi.shape[1]>np.round(360/template_res):
+        yi, xi = yi[:,:-1], xi[:,:-1]
+    area_map = (40075*template_res/360)**2*np.cos(np.deg2rad(yi))
+
+    # List of years with population data
+    pop_folders = sorted(glob.glob(os.path.join(config['ghsl_folder'],'*')))
+    pop_years = np.array([int(os.path.basename(pop_folder)[9:13]) for pop_folder in pop_folders])
+
+    # List of years
+    years = np.arange(config['year_start'],config['year_end']+1).astype(int)
+
+
+    ############################################################################
+    # Load GHSL population data and upscale to template map resolution
+    ############################################################################
+
+    print('-------------------------------------------------------------------------------')
+    t0 = time.time()
+
+    pop_upscaled = np.zeros((mapsize_global[0],mapsize_global[1],len(pop_years)),dtype=np.single)
+    for ii in np.arange(len(pop_years)):
+        print('Loading and upscaling '+str(pop_years[ii])+' population data')
+
+        # Load raw data (total population per grid-cell)
+        # Factor 16 to account for resampling from 0.0025 to 0.01 degrees
+        src = rasterio.open(os.path.join(config['ghsl_folder'],'GHS_POP_E'+str(pop_years[ii])+'_GLOBE_R2019A_4326_9ss_V1_0','GHS_POP_E'+str(pop_years[ii])+'_GLOBE_R2019A_4326_9ss_V1_0_reprojected.tif'))
+        pop_data = src.read(1).astype(np.single)*16
+        pop_data[pop_data<0] = 0
+        refmatrix = src.get_transform()
+        src.close()
+
+        # Make fully global
+        add_top = int(np.round((90-refmatrix[3])/-refmatrix[5]))
+        add_bottom = int(180/-refmatrix[5]-pop_data.shape[0]-add_top)
+        pop_shape = (int(pop_data.shape[1]/2),pop_data.shape[1])
+        pop_raw = np.zeros(pop_shape,dtype=np.single)
+        pop_raw[add_top:pop_shape[0]-add_bottom,:] = pop_data
+
+        # Area map
+        res = 180/pop_raw.shape[0]
+        _, yi = np.meshgrid(np.arange(-180+res/2,180+res/2,res), np.arange(90-res/2,-90-res/2,-res))
+        yi = yi.astype(np.single)
+        ghsl_area_map = (40075*res/360)**2*np.cos(np.deg2rad(yi))
+
+        # Convert to density and upscale
+        pop_raw = pop_raw/ghsl_area_map
+        pop_map = imresize_mean(pop_raw,mapsize_global)
+        pop_upscaled[:,:,ii] = pop_map
+
+    print("Time elapsed is "+str(time.time()-t0)+" sec")
+
+
+    ############################################################################
+    # Temporally inter- and extrapolate population data
+    ############################################################################
+
+    print('-------------------------------------------------------------------------------')
+    t0 = time.time()
+
+    pop_big = np.zeros((mapsize_global[0],mapsize_global[1],len(years)),dtype=np.single)
+    for ii in np.arange(len(years)):
+        print('Calculating '+str(years[ii])+' population')
+
+        # Extrapolate into past
+        if years[ii]<np.min(pop_years):
+            annual_change = (pop_upscaled[:,:,1]-pop_upscaled[:,:,0])/(pop_years[1]-pop_years[0])
+            number_of_years = years[ii]-pop_years[0]
+            pop_big[:,:,ii] = pop_upscaled[:,:,0]+annual_change*number_of_years
+
+        # Extrapolate into future
+        elif years[ii]>np.max(pop_years):
+            annual_change = (pop_upscaled[:,:,-1]-pop_upscaled[:,:,-2])/(pop_years[-1]-pop_years[-2])
+            number_of_years = years[ii]-pop_years[-1]
+            pop_big[:,:,ii] = pop_upscaled[:,:,-1]+annual_change*number_of_years
+
+        # Year is included in GHSL, no inter- or extrapolation necessary
+        elif years[ii] in pop_years:
+            index = np.argsort(np.abs(pop_years-years[ii]))[0]
+            pop_big[:,:,ii] = pop_upscaled[:,:,index]
+
+        # Interpolate
+        else:
+            index = np.argwhere(pop_yearsnp.round(180/template_res):
+        yi, xi = yi[:-1,:], xi[:-1,:]
+    if yi.shape[1]>np.round(360/template_res):
+        yi, xi 
= yi[:,:-1], xi[:,:-1] + area_map = (40075*template_res/360)**2*np.cos(np.deg2rad(yi)) + + # List of years + years = np.arange(config['year_start'],config['year_end']+1).astype(int) + + # List of countries and US states + country_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'un_country_codes.csv')) + state_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'us-state-ansi-fips.csv')) + + + ############################################################################ + # Load and reproject Huang et al. (2018) water demand data (only used for + # comparison) + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and reprojecting Huang et al. (2018) water demand data') + t0 = time.time() + + dset = Dataset(os.path.join(config['huang_folder'],'withd_dom.nc')) + raw_data = np.array(dset.variables['withd_dom'][:]) + raw_lat = np.array(dset.variables['lat'][:]) + raw_lon = np.array(dset.variables['lon'][:]) + dset.close() + + Huang_withdrawal = np.zeros((360,720,raw_data.shape[0]),dtype=np.single)*np.NaN + rows,cols = latlon2rowcol(raw_lat,raw_lon,0.5,90,-180) + for ii in np.arange(raw_data.shape[0]): + reprojected = np.zeros((360,720),dtype=np.single)*np.NaN + reprojected[rows,cols] = raw_data[ii,:] + Huang_withdrawal[:,:,ii] = reprojected + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load country border raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling country border raster') + t0 = time.time() + country_code_map = load_country_code_map(os.path.join(config['world_borders_folder'],'TM_WORLD_BORDERS_UN_rasterized.tif'),mapsize_global) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load US states raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling US state border raster') + t0 = time.time() + state_code_map = load_us_state_code_map(os.path.join(config['us_states_folder'],'cb_2018_us_state_500k_rasterized.tif'),mapsize_global) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Map of R parameter: measures the relative difference of domestic water + # withdrawal between the warmest and coldest months in a given year. + # Values from Huang et al. (2018). 
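+    # Countries without a country-specific value below use the default of R = 0.45.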
+ ############################################################################ + + R_map = np.zeros(mapsize_global,dtype=np.single)+0.45 + R_map[country_code_map==124] = 0.36 # Canada + R_map[country_code_map==840] = 0.52 # US + R_map[country_code_map==36] = 0.8 # Australia + R_map[country_code_map==356] = 0.29 # India + R_map[country_code_map==156] = 0.2 # China + R_map[country_code_map==392] = 0.1 # Japan + R_map[country_code_map==724] = 0.1 # Spain + + + ############################################################################ + # Compute time series of population for each country + ############################################################################ + + filepath = os.path.join(config['output_folder'],'step2_domestic_demand','tables','population.csv') + + if os.path.isfile(filepath): + table_population = pd.read_csv(filepath,index_col=0).values + + else: + print('-------------------------------------------------------------------------------') + print('Computating population for each country and year') + t0 = time.time() + + table_population = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + for ii in np.arange(len(years)): + print(str(years[ii])) + npz = np.load(os.path.join(config['output_folder'],'step1_population_density',str(years[ii])+'.npz')) + pop_map = npz['data'] + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + mask = country_code_map==country_code + table_population[jj,ii] = np.sum(pop_map[mask]*area_map[mask])+1 # +1 to avoid divide by zero + + pd.DataFrame(table_population,index=country_codes['name'],columns=years).to_csv(filepath) + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Temporally inter- and extrapolate AQUASTAT data using population + # estimates + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Temporally inter- and extrapolating AQUASTAT data') + t0 = time.time() + + aquastat = pd.read_csv(os.path.join(config['aquastat_folder'],'aquastat_clean.csv'),index_col=False) + + table_aquastat_domestic_withdrawal_interp = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + + # Find location of data in AQUASTAT table + sel = np.where((aquastat['Area Id']==country_code) & (aquastat['Variable Name']=='Municipal water withdrawal'))[0] + if len(sel)==0: + continue + + fao_years = aquastat['Year'][sel].values + fao_withdrawals = aquastat['Value'][sel].values # km3 + keep = ~np.isnan(fao_withdrawals) + fao_years = fao_years[keep] + fao_withdrawals = fao_withdrawals[keep] + if len(fao_withdrawals)==0: + continue + + # Interpolate between years + table_aquastat_domestic_withdrawal_interp[jj,:] = np.interp(years,fao_years,fao_withdrawals) # km3/year + + # If necessary, extrapolate backwards based on population + if fao_years[0]>years[0]: + ind = np.argwhere(years==fao_years[0])[0][0] + table_aquastat_domestic_withdrawal_interp[jj,:ind] = table_population[jj,:ind]*table_aquastat_domestic_withdrawal_interp[jj,ind]/table_population[jj,ind] + + # If necessary, extrapolate forwards based on population + if fao_years[-1]1.2): + print('Considerable discrepancy between Lohrmann et al. 
(2019) and EIA plant capacity (factor '+str(corr_fac)+')') + + # Compute thermoelectric withdrawal time series based on EIA power generation time series + withdrawal_thermoelectric_2015 = table_lohrmann_plant_withdrawal[jj]*corr_fac + table_withdrawal_thermoelectric[jj,:] = table_eia_electricity_generation[jj,:]*withdrawal_thermoelectric_2015/table_eia_electricity_generation[jj,ii_2015] + + # Compute manufacturing withdrawal time series based on MVA + kw = dict(method="slinear", fill_value="extrapolate", limit_direction="both") + aquastat_industry_withdrawal_2015 = pd.Series(table_aquastat_industry_withdrawal[jj,:]).interpolate(**kw).values[ii_2015] + withdrawal_manufacturing_2015 = aquastat_industry_withdrawal_2015-withdrawal_thermoelectric_2015 + table_withdrawal_manufacturing[jj,:] = table_worldbank_mva[jj,:]*withdrawal_manufacturing_2015/table_worldbank_mva[jj,ii_2015] + ''' + + AQUASTAT_nvals = np.sum(np.isnan(table_aquastat_industry_withdrawal[jj,:])==False) + GCAM_nvals = np.sum(np.isnan(table_gcam_withdrawal_industry[jj,:])==False) + + #-------------------------------------------------------------------------- + # If AQUASTAT and GCAM available + #-------------------------------------------------------------------------- + + if (AQUASTAT_nvals>1) & (GCAM_nvals>0): + table_withdrawal_data_source[jj] = 'AQUASTAT+GCAM' + + # Interpolate AQUASTAT water demand (no extrapolation) + table_withdrawal_industry[jj,:] = pd.Series(table_aquastat_industry_withdrawal[jj,:]).interpolate(method="slinear").values + + # Fill AQUASTAT gaps using MVA + ts_mva = table_worldbank_mva[jj,:] + ts_ind = table_withdrawal_industry[jj,:].copy() + if np.sum(np.isnan(ts_mva)==False)>1: + ts_mva = pd.Series(ts_mva).interpolate(method="slinear").values + indices = np.where(np.isnan(ts_ind)==False)[0] + ts_ind[:indices[0]] = ts_mva[:indices[0]]*ts_ind[indices[0]]/ts_mva[indices[0]] + ts_ind[indices[-1]+1:] = ts_mva[indices[-1]+1:]*ts_ind[indices[-1]]/ts_mva[indices[-1]] + table_withdrawal_industry[jj,:] = ts_ind.copy() + + # Fill remaining AQUASTAT gaps using population + ts_ind = table_withdrawal_industry[jj,:] + ts_pop = table_population[jj,:] + if np.sum(np.isnan(ts_ind))>0: + indices = np.where(np.isnan(ts_ind)==False)[0] + ts_ind[:indices[0]] = ts_pop[:indices[0]]*ts_ind[indices[0]]/(ts_pop[indices[0]]+1) # +1 to avoid divide by zero + ts_ind[indices[-1]+1:] = ts_pop[indices[-1]+1:]*ts_ind[indices[-1]]/(ts_pop[indices[-1]]+1) # +1 to avoid divide by zero + table_withdrawal_industry[jj,:] = ts_ind.copy() + + # Linearly interpolate and nearest-neighbor extrapolate GCAM data + ts_gcam_withdrawal_industry = pd.Series(table_gcam_withdrawal_industry[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values + ts_gcam_withdrawal_thermoelectric = pd.Series(table_gcam_withdrawal_thermoelectric[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values + ts_gcam_withdrawal_manufacturing = ts_gcam_withdrawal_industry-ts_gcam_withdrawal_thermoelectric + + # Rescale GCAM to Vassolo and Doll (2005), if not missing and if country large + # enough (i.e., covers enough grid cells) + country_ncells = np.sum(country_code_map_360x720==country_code) + if (np.isnan(vassolo_doll_withdrawal_thermoelectric[jj]+vassolo_doll_withdrawal_manufacturing[jj])==False) & (country_ncells>20): + ts_gcam_withdrawal_thermoelectric = ts_gcam_withdrawal_thermoelectric*vassolo_doll_withdrawal_thermoelectric[jj]/(ts_gcam_withdrawal_thermoelectric[ii_1995]+10**-6) # +10**-6 to avoid 
divide by zero + ts_gcam_withdrawal_manufacturing = ts_gcam_withdrawal_manufacturing*vassolo_doll_withdrawal_manufacturing[jj]/(ts_gcam_withdrawal_manufacturing[ii_1995]+10**-6) # +10**-6 to avoid divide by zero + ts_gcam_withdrawal_industry = ts_gcam_withdrawal_thermoelectric+ts_gcam_withdrawal_manufacturing + + # Temporally disaggregate industrial into manufacturing and thermoelectric using GCAM + table_withdrawal_thermoelectric[jj,:] = table_withdrawal_industry[jj,:]*ts_gcam_withdrawal_thermoelectric/(ts_gcam_withdrawal_industry+10**-6) # +10**-6 to avoid divide by zero + table_withdrawal_manufacturing[jj,:] = (table_withdrawal_industry[jj,:]+10**-6)-table_withdrawal_thermoelectric[jj,:] + + + #-------------------------------------------------------------------------- + # If AQUASTAT not available, use GCAM + #-------------------------------------------------------------------------- + + elif (AQUASTAT_nvals<=1) & (GCAM_nvals>0): + table_withdrawal_data_source[jj] = 'GCAM' + + # Linearly interpolate and nearest-neighbor extrapolate GCAM data + table_withdrawal_industry[jj,:] = pd.Series(table_gcam_withdrawal_industry[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values + table_withdrawal_thermoelectric[jj,:] = pd.Series(table_gcam_withdrawal_thermoelectric[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values + table_withdrawal_manufacturing[jj,:] = table_withdrawal_industry[jj,:]-table_withdrawal_thermoelectric[jj,:] + + elif GCAM_nvals==0: + table_withdrawal_data_source[jj] = 'None' + table_withdrawal_industry[jj,:], table_withdrawal_thermoelectric[jj,:], table_withdrawal_manufacturing[jj,:] = 0, 0, 0 + + + #-------------------------------------------------------------------------- + # Plot figure + #-------------------------------------------------------------------------- + + # Initialize figure + f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True) + + # Subpanel 1 + ax1.plot(years,table_aquastat_industry_withdrawal[jj,:],'*',label='FAO industry withdrawal',color=(0.5,0.5,0.5)) + ax1.plot(years,table_gcam_withdrawal_industry[jj,:],'+',label='GCAM industry withdrawal',color=(0,0.5,0)) + ax1.plot(years,table_gcam_withdrawal_industry[jj,:]-table_gcam_withdrawal_thermoelectric[jj,:],'+',label='GCAM manufacturing withdrawal',color=(0,1,0)) + ax1.plot(years,table_withdrawal_industry[jj,:],label='Beck industry withdrawal',color=(0.5,0,0)) + ax1.plot(years,table_withdrawal_manufacturing[jj,:],label='Beck manufacturing withdrawal',color=(1,0,0)) + ax1.legend() + ax1.set_ylabel('Withdrawal (km^3/year)') + ax1.set_title(country_name) + + # Subpanel 2 + ax2.plot(years,table_Huang_withdrawal_industry[jj,:],label='Huang industry withdrawal',color=(0,0,0.5)) + ax2.plot(years,table_Huang_withdrawal_manufacturing[jj,:],label='Huang manufacturing withdrawal',color=(0,0,1)) + ax2.plot(years,table_withdrawal_industry[jj,:],label='Beck industry withdrawal',color=(0.5,0,0)) + ax2.plot(years,table_withdrawal_manufacturing[jj,:],label='Beck manufacturing withdrawal',color=(1,0,0)) + ax2.legend() + ax2.set_ylabel('Withdrawal (km^3/year)') + ax2.set_title(country_name) + + # Subpanel 3 + ax3.plot(years,table_worldbank_gdp[jj,:],label='GDP',color='g') + ax3.plot(years,table_worldbank_mva[jj,:],label='MVA',color='b') + ax3.legend() + ax3.set_ylabel('Constant 2010 US$') + ax3.set_title(country_name) + + # Save figure + f.set_size_inches(10, 10) + 
plt.savefig(os.path.join(config['output_folder'],'step3a_industrial_demand','figures',str(jj).zfill(3)+'_'+country_name+'.png'),dpi=150) + plt.close() + + # Save to csv + pd.DataFrame(table_withdrawal_data_source,index=country_codes['name'],columns=['Withdrawal data source']).to_csv(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','withdrawal_data_source.csv')) + pd.DataFrame(table_withdrawal_industry,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','withdrawal_industry.csv')) + pd.DataFrame(table_withdrawal_manufacturing,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','withdrawal_manufacturing.csv')) + pd.DataFrame(table_withdrawal_thermoelectric,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','withdrawal_thermoelectric.csv')) + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Compute monthly maps of heating degree days (HDD) and cooling degree + # days (CDD) + ############################################################################ + + print('-------------------------------------------------------------------------------') + + for year in years: + for month in np.arange(1,13): + t0 = time.time() + + outpath_hdd = os.path.join(config['output_folder'],'step3a_industrial_demand','hdd',str(year)+str(month).zfill(2)+'.npz') + outpath_cdd = os.path.join(config['output_folder'],'step3a_industrial_demand','cdd',str(year)+str(month).zfill(2)+'.npz') + + if os.path.isfile(outpath_hdd) & os.path.isfile(outpath_cdd): continue + + print('Computing HDD and CDD for '+str(year)+' month '+str(month)) + + # Load MSWX air temperature data for entire month + ndays = monthrange(year,month)[1] + mswx_month = np.zeros((1800,3600,ndays),dtype=np.single)*np.NaN + for day in np.arange(1,ndays+1): + filepath = os.path.join(config['mswx_folder'],'Past','Temp','Daily',datetime(year,month,day).strftime('%Y%j')+'.nc') + if os.path.isfile(filepath): + dset = Dataset(filepath) + mswx_month[:,:,day-1] = np.squeeze(np.array(dset.variables['air_temperature'])) + + # Compute HDD and CDD + mswx_hdd = np.nansum((18-mswx_month)*(mswx_month<18),axis=2) + mswx_cdd = np.nansum((mswx_month-18)*(mswx_month>=18),axis=2) + + # Save results + np.savez_compressed(outpath_hdd,data=mswx_hdd) + np.savez_compressed(outpath_cdd,data=mswx_cdd) + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + # pdb.set_trace() + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/step3b_industrial_demand.py b/src/lisfloodutilities/water-demand-historic/step3b_industrial_demand.py new file mode 100644 index 0000000..6aa5fa3 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/step3b_industrial_demand.py @@ -0,0 +1,493 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +""" +Copyright 2019-2023 European Union +Licensed under the EUPL, Version 1.2 or as soon they will be approved by the European Commission subsequent versions of the EUPL (the "Licence"); +You may not use this work except in compliance with the Licence. 
+You may obtain a copy of the Licence at: +https://joinup.ec.europa.eu/sites/default/files/inline-files/EUPL%20v1_2%20EN(1).txt +Unless required by applicable law or agreed to in writing, software distributed under the Licence is distributed on an "AS IS" basis, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the Licence for the specific language governing permissions and limitations under the Licence. + +""" +__author__ = "Hylke E. Beck" +__date__ = "January 2022" + +import os, sys, glob, time, pdb +import pandas as pd +import numpy as np +from netCDF4 import Dataset +import matplotlib.pyplot as plt +from skimage.transform import resize +from tools import * +import rasterio +from calendar import monthrange + +# Load configuration file +config = load_config(sys.argv[1]) + +def main(): + print('===============================================================================') + + # Create output folder + if os.path.isdir(os.path.join(config['output_folder'],'step3b_industrial_demand','figures'))==False: + os.makedirs(os.path.join(config['output_folder'],'step3b_industrial_demand','figures')) + + # Load template map + dset = Dataset(config['templatemap_path']) + template_lat = dset.variables['lat'][:] + template_lon = dset.variables['lon'][:] + template_res = template_lon[1]-template_lon[0] + varname = list(dset.variables.keys())[-1] + template_np = np.array(dset.variables[varname][:]) + + # Determine map sizes + mapsize_global = (np.round(180/template_res).astype(int),np.round(360/template_res).astype(int)) + mapsize_template = template_np.shape + row_upper,col_left = latlon2rowcol(template_lat[0],template_lon[0],template_res,90,-180) + + # Compute area for each grid-cell (includes correction because lat/lon + # values in templates are often not rounded... 
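+    # Note: the two checks below trim an extra row/column that np.arange may produce when the resolution does not divide 180/360 exactly.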
+ xi, yi = np.meshgrid(np.arange(-180+template_res/2,180+template_res/2,template_res), np.arange(90-template_res/2,-90-template_res/2,-template_res)) + if yi.shape[0]>np.round(180/template_res): + yi, xi = yi[:-1,:], xi[:-1,:] + if yi.shape[1]>np.round(360/template_res): + yi, xi = yi[:,:-1], xi[:,:-1] + area_map = (40075*template_res/360)**2*np.cos(np.deg2rad(yi)) + + # List of years + years = np.arange(config['year_start'],config['year_end']+1).astype(int) + + # List of countries and US states + country_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'un_country_codes.csv')) + state_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'us-state-ansi-fips.csv')) + + + ############################################################################ + # Load country border raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling country border raster') + t0 = time.time() + country_code_map = load_country_code_map(os.path.join(config['world_borders_folder'],'TM_WORLD_BORDERS_UN_rasterized.tif'),mapsize_global) + country_code_map = fill(country_code_map) + country_code_map_1800x3600 = resize(country_code_map,(1800,3600),order=0,mode='edge',anti_aliasing=False) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load US states raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling US state border raster') + t0 = time.time() + state_code_map = load_us_state_code_map(os.path.join(config['us_states_folder'],'cb_2018_us_state_500k_rasterized.tif'),mapsize_global) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load and reproject Huang et al. (2018) water demand data (only used for + # comparison) + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and reprojecting Huang et al. 
(2018) water demand data') + t0 = time.time() + + # Loop over vars + vars = ['elec','mfg'] + Huang_withdrawal = np.zeros((len(vars),360,720,480),dtype=np.single)*np.NaN + for vv in np.arange(len(vars)): + var = vars[vv] + + # Load raw data + dset = Dataset(os.path.join(config['huang_folder'],'withd_'+var+'.nc')) + raw_data = np.array(dset.variables['withd_'+var][:]) + raw_lat = np.array(dset.variables['lat'][:]) + raw_lon = np.array(dset.variables['lon'][:]) + dset.close() + + # Reproject data + rows,cols = latlon2rowcol(raw_lat,raw_lon,0.5,90,-180) + for ii in np.arange(raw_data.shape[0]): + reprojected = np.zeros((360,720),dtype=np.single) + reprojected[rows,cols] = raw_data[ii,:] + Huang_withdrawal[vv,:,:,ii] = reprojected + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load all tables from preceding script + ############################################################################ + + files1 = glob.glob(os.path.join(config['output_folder'],'step2_domestic_demand','tables','*.csv')) + files2 = glob.glob(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','*.csv')) + files = files1+files2 + tables = {} + for ii in np.arange(len(files)): + fname = os.path.basename(files[ii])[:-4] + tables[fname] = pd.read_csv(files[ii],index_col=0).values + + files = glob.glob(os.path.join(config['output_folder'],'step2_domestic_demand','tables','*.csv')) + for ii in np.arange(len(files)): + fname = os.path.basename(files[ii])[:-4] + tables[fname] = pd.read_csv(files[ii],index_col=0).values + + + ############################################################################ + # Downscale manufacturing and thermoelectric withdrawals + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Downscaling manufacturing and thermoelectric withdrawals') + t0 = time.time() + + varnames = ['ind','ene'] + + for varname in varnames: + + # Load country withdrawals from preceding script + if varname=='ene': + country_table = tables['withdrawal_thermoelectric'] + state_table = tables['usgs_thermoelectric_withdrawal'] + if varname=='ind': + country_table = tables['withdrawal_manufacturing'] + state_table = tables['usgs_manufacturing_withdrawal'] + + + #-------------------------------------------------------------------------- + # Initialize netCDF + #-------------------------------------------------------------------------- + + file = os.path.join(config['output_folder'],'step3b_industrial_demand',varname+'.nc') + + if os.path.isfile(file): + os.remove(file) + + ncfile = Dataset(file, 'w', format='NETCDF4') + ncfile.history = 'Created on %s' % datetime.utcnow().strftime('%Y-%m-%d %H:%M') + + ncfile.createDimension('lon', len(template_lon)) + ncfile.createDimension('lat', len(template_lat)) + ncfile.createDimension('time', None) + + ncfile.createVariable('lon', 'f8', ('lon',)) + ncfile.variables['lon'][:] = template_lon + ncfile.variables['lon'].units = 'degrees_east' + ncfile.variables['lon'].long_name = 'longitude' + + ncfile.createVariable('lat', 'f8', ('lat',)) + ncfile.variables['lat'][:] = template_lat + ncfile.variables['lat'].units = 'degrees_north' + ncfile.variables['lat'].long_name = 'latitude' + + ncfile.createVariable('time', 'f8', 'time') + ncfile.variables['time'].units = 'days since 1979-01-02 00:00:00' + ncfile.variables['time'].long_name = 'time' + 
ncfile.variables['time'].calendar = 'proleptic_gregorian' + + ncfile.createVariable(varname, np.single, ('time', 'lat', 'lon'), zlib=True, chunksizes=(1,200,200,), fill_value=-9999, least_significant_digit=1) + ncfile.variables[varname].units = 'mm/d' + + for ii in np.arange(len(years)): + year = years[ii] + print(varname+' year '+str(year)) + t1 = time.time() + + # Load population data + npz = np.load(os.path.join(config['output_folder'],'step1_population_density',str(year)+'.npz')) + pop_map = npz['data']+10**-6 + + + #-------------------------------------------------------------------------- + # Spatial downscaling + #-------------------------------------------------------------------------- + + # Spatially downscale annual country withdrawals using population data + data_annual_map = np.zeros(mapsize_global,dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + mask = country_code_map==country_code + country_val = country_table[jj,ii] # Country withdrawal estimate from preceding script + data_annual_map[mask] = 10**6*pop_map[mask]*country_val/(np.sum(pop_map[mask]*area_map[mask])) + #np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + + # Spatially downscale annual US state withdrawals using population data + for jj in np.arange(state_codes.shape[0]): + state_code = state_codes.iloc[jj]['st'] + mask = state_code_map==state_code + state_val = pd.Series(state_table[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values[ii] # State withdrawal estimate from preceding script + data_annual_map[mask] = 10**6*pop_map[mask]*state_val/(np.sum(pop_map[mask]*area_map[mask])) + #np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + + # Check if there are too many missing values + nan_percentage = 100*np.sum(np.isnan(data_annual_map))/(data_annual_map.shape[0]*data_annual_map.shape[1]) + assert nan_percentage<2 + + # Replace NaNs with zeros + data_annual_map[np.isnan(data_annual_map)] = 0 + + + #-------------------------------------------------------------------------- + # Temporal downscaling + #-------------------------------------------------------------------------- + + # Compute p coefficients for temporally downscaling thermoelectric (see Huang et al., 2018) + p_b, p_it = np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN + p_h, p_c, p_u = np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + country_name = country_codes.iloc[jj]['name'] + country_acronym = country_codes.iloc[jj]['alpha-3'] + kw = dict(method="linear",fill_value="extrapolate", limit_direction="both") + gcam_elec_building = pd.Series(tables['gcam_elec_building'][jj,:]).interpolate(**kw).values[ii]+10**-6 # +10**-6 to avoid divide by zero + gcam_elec_trans_ind = pd.Series(tables['gcam_elec_trans_ind'][jj,:]).interpolate(**kw).values[ii]+10**-6 + gcam_elec_heating = pd.Series(tables['gcam_elec_heating'][jj,:]).interpolate(**kw).values[ii]+10**-6 + gcam_elec_cooling = pd.Series(tables['gcam_elec_cooling'][jj,:]).interpolate(**kw).values[ii]+10**-6 + gcam_elec_other = pd.Series(tables['gcam_elec_other'][jj,:]).interpolate(**kw).values[ii]+10**-6 + mask = country_code_map==country_code + p_b[mask] = gcam_elec_building/(gcam_elec_building+gcam_elec_trans_ind) + p_it[mask] = 
gcam_elec_trans_ind/(gcam_elec_building+gcam_elec_trans_ind) + p_h[mask] = gcam_elec_heating/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + p_c[mask] = gcam_elec_cooling/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + p_u[mask] = gcam_elec_other/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + p_b, p_it = fill(p_b), fill(p_it) + p_h, p_c, p_u = fill(p_h), fill(p_c), fill(p_u) + + # Load HDD and CDD data + hdd_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + cdd_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + for month in np.arange(1,13): + npz = np.load(os.path.join(config['output_folder'],'step3a_industrial_demand','hdd',str(year)+str(month).zfill(2)+'.npz')) + hdd_monthly_maps[:,:,month-1] = imresize_mean(npz['data'],mapsize_global)+10**-6 + npz = np.load(os.path.join(config['output_folder'],'step3a_industrial_demand','cdd',str(year)+str(month).zfill(2)+'.npz')) + cdd_monthly_maps[:,:,month-1] = imresize_mean(npz['data'],mapsize_global)+10**-6 + + # Temporally downscale annual withdrawals + data_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + for month in np.arange(1,13): + month_ndays = monthrange(year,month)[1] + if varname=='ene': + + # Compute annual HDD and CDD sums + hdd_annual_sum = np.sum(hdd_monthly_maps,axis=2) + cdd_annual_sum = np.sum(cdd_monthly_maps,axis=2) + + # Temporally downscale withdrawals following Huang et al. (2018) equations 7 to 10 + data_monthly_map = np.zeros(mapsize_global,dtype=np.single)*np.NaN + mask = (hdd_annual_sum<650) & (cdd_annual_sum<450) + data_monthly_map[mask] = month_ndays*data_annual_map[mask]/365.25 + mask = (hdd_annual_sum>=650) & (cdd_annual_sum<450) + data_monthly_map[mask] = data_annual_map[mask]*(p_b[mask]*((p_h[mask]+p_c[mask])*hdd_monthly_maps[:,:,month-1][mask]/hdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + mask = (hdd_annual_sum<650) & (cdd_annual_sum>=450) + data_monthly_map[mask] = data_annual_map[mask]*(p_b[mask]*((p_h[mask]+p_c[mask])*cdd_monthly_maps[:,:,month-1][mask]/cdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + mask = (hdd_annual_sum>=650) & (cdd_annual_sum>=450) + data_monthly_map[mask] = data_annual_map[mask]*(p_b[mask]*(p_h[mask]*hdd_monthly_maps[:,:,month-1][mask]/hdd_annual_sum[mask]+p_c[mask]*cdd_monthly_maps[:,:,month-1][mask]/cdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + data_monthly_map[np.isnan(data_monthly_map)] = 0 + data_monthly_maps[:,:,month-1] = data_monthly_map + + if varname=='ind': + data_monthly_maps[:,:,month-1] = month_ndays*data_annual_map/365.25 + + + #-------------------------------------------------------------------------- + # Plot figure + #-------------------------------------------------------------------------- + + # Initialize figure + f, (ax1, ax2) = plt.subplots(2, 1, sharex=True) + + # Subpanel 1 + ax1.imshow(np.sqrt(imresize_mean(data_annual_map,(360,720))),vmin=0,vmax=15,cmap=plt.get_cmap('YlGnBu')) + ax1.set_title('Beck et al. (2022) withdrawal (mm/month)') + + # Subpanel 2 + try: + if varname=='ene': varindex = 0 + if varname=='ind': varindex = 1 + timeindex = year-1971 + ax2.imshow(np.sqrt(np.sum(Huang_withdrawal[varindex,:,:,np.arange(timeindex*12,timeindex*12+12)],axis=0)),vmin=0,vmax=15,cmap=plt.get_cmap('YlGnBu')) + ax2.set_title('Huang et al. 
(2018) withdrawal (mm/month)') + except: + pass + + # Save figure + f.set_size_inches(10, 10) + plt.savefig(os.path.join(config['output_folder'],'step3b_industrial_demand','figures',varname+'_'+str(year)+'.png'),dpi=150) + plt.close() + + + #-------------------------------------------------------------------------- + # Save to netCDF + #-------------------------------------------------------------------------- + + for month in np.arange(1,13): + month_ndays = monthrange(year,month)[1] + data = data_monthly_maps[:,:,month-1]/month_ndays + data = np.round(data,decimals=2) + data = data[row_upper:row_upper+len(template_lat),col_left:col_left+len(template_lon)] # Subset to template map area + index = (year-config['year_start'])*12+month-1 + ncfile.variables['time'][index] = (pd.to_datetime(datetime(year,month,1))-pd.to_datetime(datetime(1979, 1, 1))).total_seconds()/86400 + ncfile.variables[varname][index,:,:] = data + + print("Time elapsed is "+str(time.time()-t1)+" sec") + + ncfile.close() + + # test: reverse computation + # #file = os.path.join(config['output_folder'],'step3b_industrial_demand',varname+'.nc') + # file = os.path.join('/media/sf_VMSharedFolder/lf_utilities_waterdemand_test/waterdemand_GloFAS_hylke',varname+'.nc') + + + # if os.path.isfile(file): + # dset = Dataset(file,'r') + # data = np.array(dset.variables[varname][0:12,:,:]) + # dset.close() + + # year=1979 + # ii=0 + + # data_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + # for month in np.arange(1,13): + # month_ndays = monthrange(year,month)[1] + # data_monthly_maps[:,:,month-1] = data[month-1,:,:]*month_ndays + + # #-------------------------------------------------------------------------- + # # Temporal downscaling + # #-------------------------------------------------------------------------- + + # # Compute p coefficients for temporally downscaling thermoelectric (see Huang et al., 2018) + # p_b, p_it = np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN + # p_h, p_c, p_u = np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN,np.zeros(mapsize_global,dtype=np.single)*np.NaN + # for jj in np.arange(country_codes.shape[0]): + # country_code = country_codes.iloc[jj]['country-code'] + # country_name = country_codes.iloc[jj]['name'] + # country_acronym = country_codes.iloc[jj]['alpha-3'] + # kw = dict(method="linear",fill_value="extrapolate", limit_direction="both") + # gcam_elec_building = pd.Series(tables['gcam_elec_building'][jj,:]).interpolate(**kw).values[ii]+10**-6 # +10**-6 to avoid divide by zero + # gcam_elec_trans_ind = pd.Series(tables['gcam_elec_trans_ind'][jj,:]).interpolate(**kw).values[ii]+10**-6 + # gcam_elec_heating = pd.Series(tables['gcam_elec_heating'][jj,:]).interpolate(**kw).values[ii]+10**-6 + # gcam_elec_cooling = pd.Series(tables['gcam_elec_cooling'][jj,:]).interpolate(**kw).values[ii]+10**-6 + # gcam_elec_other = pd.Series(tables['gcam_elec_other'][jj,:]).interpolate(**kw).values[ii]+10**-6 + # mask = country_code_map==country_code + # p_b[mask] = gcam_elec_building/(gcam_elec_building+gcam_elec_trans_ind) + # p_it[mask] = gcam_elec_trans_ind/(gcam_elec_building+gcam_elec_trans_ind) + # p_h[mask] = gcam_elec_heating/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + # p_c[mask] = gcam_elec_cooling/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + # p_u[mask] = gcam_elec_other/(gcam_elec_heating+gcam_elec_cooling+gcam_elec_other) + # p_b, p_it = fill(p_b), 
fill(p_it) + # p_h, p_c, p_u = fill(p_h), fill(p_c), fill(p_u) + + # # Load HDD and CDD data + # hdd_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + # cdd_monthly_maps = np.zeros((mapsize_global[0],mapsize_global[1],12),dtype=np.single)*np.NaN + # for month in np.arange(1,13): + # npz = np.load(os.path.join(config['output_folder'],'step3a_industrial_demand','hdd',str(year)+str(month).zfill(2)+'.npz')) + # hdd_monthly_maps[:,:,month-1] = imresize_mean(npz['data'],mapsize_global)+10**-6 + # npz = np.load(os.path.join(config['output_folder'],'step3a_industrial_demand','cdd',str(year)+str(month).zfill(2)+'.npz')) + # cdd_monthly_maps[:,:,month-1] = imresize_mean(npz['data'],mapsize_global)+10**-6 + + # # Temporally downscale annual withdrawals + # data_annual_map = np.zeros(mapsize_global,dtype=np.single)*np.NaN + # for month in np.arange(1,13): + # month_ndays = monthrange(year,month)[1] + + # # Compute annual HDD and CDD sums + # hdd_annual_sum = np.sum(hdd_monthly_maps,axis=2) + # cdd_annual_sum = np.sum(cdd_monthly_maps,axis=2) + + # # Temporally downscale withdrawals following Huang et al. (2018) equations 7 to 10 + # data_monthly_map = data_monthly_maps[:,:,month-1] + # mask = (hdd_annual_sum<650) & (cdd_annual_sum<450) + # data_annual_map[mask] = data_monthly_map[mask]*365.25/month_ndays + # mask = (hdd_annual_sum>=650) & (cdd_annual_sum<450) + # data_annual_map[mask] = data_monthly_map[mask]/(p_b[mask]*((p_h[mask]+p_c[mask])*hdd_monthly_maps[:,:,month-1][mask]/hdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + # mask = (hdd_annual_sum<650) & (cdd_annual_sum>=450) + # data_annual_map[mask] = data_monthly_map[mask]/(p_b[mask]*((p_h[mask]+p_c[mask])*cdd_monthly_maps[:,:,month-1][mask]/cdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + # mask = (hdd_annual_sum>=650) & (cdd_annual_sum>=450) + # data_annual_map[mask] = data_monthly_map[mask]/(p_b[mask]*(p_h[mask]*hdd_monthly_maps[:,:,month-1][mask]/hdd_annual_sum[mask]+p_c[mask]*cdd_monthly_maps[:,:,month-1][mask]/cdd_annual_sum[mask]+p_u[mask]*month_ndays/365.25)+p_it[mask]*month_ndays/365.25) + + # # Load population data + # npz = np.load(os.path.join(config['output_folder'],'step1_population_density',str(year)+'.npz')) + # pop_map = npz['data']+10**-6 + + + # #-------------------------------------------------------------------------- + # # Spatial downscaling + # #-------------------------------------------------------------------------- + + # # Spatially downscale annual country withdrawals using population data + # for jj in np.arange(country_codes.shape[0]): + # country_code = country_codes.iloc[jj]['country-code'] + # mask = country_code_map==country_code + # country_val = np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + # country_table[jj,ii] = country_val # Country withdrawal estimate from preceding script + + # pd.DataFrame(country_table[:,0],index=country_codes['name'],columns=years).to_csv("testCountries.csv") + + # sum_state_val=0 + # us_mask=np.zeros(mapsize_global,dtype=np.bool) + # us_mask=False + # # Spatially downscale annual US state withdrawals using population data + # for jj in np.arange(state_codes.shape[0]): + # state_code = state_codes.iloc[jj]['st'] + # mask = state_code_map==state_code + # us_mask = np.logical_or(us_mask,mask) + # state_val = np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + # sum_state_val+=state_val + # state_table[jj,ii] = state_val + + # 
pd.DataFrame(state_table[:,0],index=state_codes['stname'],columns=years).to_csv("testStates.csv") + + # print("End test, time is "+str(time.time()-t0)+" sec") + + + + print("Total time elapsed is "+str(time.time()-t0)+" sec") + + + ''' + ############################################################################ + # Load HTAP data + ############################################################################ + + dset = Dataset(os.path.join(config['htap_folder'],'htapv2.2.emisso2.surface.x3600y1800t12.2016.integrate.nc4')) + raw = np.array(dset.variables['sanl1'][:]) + mean = np.flipud(np.nanmean(raw,axis=0)*10**9) + lat = np.array(dset.variables['lat'][:])[::-1] + lon = np.array(dset.variables['lon'][:]) + dset.close() + + file = os.path.join(config['htap_folder'],'htapv2.2.emisso2.surface.x3600y1800t12.2016.integrate_mean_hylke.nc') + + ncfile = Dataset(file, 'w', format='NETCDF4') + ncfile.history = 'Created on %s' % datetime.utcnow().strftime('%Y-%m-%d %H:%M') + + ncfile.createDimension('lon', len(lon)) + ncfile.createDimension('lat', len(lat)) + + ncfile.createVariable('lon', 'f8', ('lon',)) + ncfile.variables['lon'][:] = lon + ncfile.variables['lon'].units = 'degrees_east' + ncfile.variables['lon'].long_name = 'longitude' + + ncfile.createVariable('lat', 'f8', ('lat',)) + ncfile.variables['lat'].units = 'degrees_north' + ncfile.variables['lat'][:] = lat + ncfile.variables['lat'].long_name = 'latitude' + + ncfile.createVariable(varname, np.single, ('lat', 'lon'), zlib=True, chunksizes=(32,32,), fill_value=-9999) #, least_significant_digit=9 + + ncfile.variables[varname][:,:] = mean + + ncfile.close() + + ''' + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/step4_livestock_demand.py b/src/lisfloodutilities/water-demand-historic/step4_livestock_demand.py new file mode 100644 index 0000000..03adfd2 --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/step4_livestock_demand.py @@ -0,0 +1,494 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +""" +Copyright 2019-2023 European Union +Licensed under the EUPL, Version 1.2 or as soon they will be approved by the European Commission subsequent versions of the EUPL (the "Licence"); +You may not use this work except in compliance with the Licence. +You may obtain a copy of the Licence at: +https://joinup.ec.europa.eu/sites/default/files/inline-files/EUPL%20v1_2%20EN(1).txt +Unless required by applicable law or agreed to in writing, software distributed under the Licence is distributed on an "AS IS" basis, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the Licence for the specific language governing permissions and limitations under the Licence. + +""" +__author__ = "Hylke E. 
Beck" +__date__ = "January 2022" + +import os, sys, glob, time, pdb +import pandas as pd +import numpy as np +from netCDF4 import Dataset +import matplotlib.pyplot as plt +from skimage.transform import resize +from tools import * +import rasterio +from calendar import monthrange +from scipy import stats + +# Load configuration file +config = load_config(sys.argv[1]) + +def main(): + print('===============================================================================') + + # Create output folders + if os.path.isdir(os.path.join(config['output_folder'],'step4_livestock_demand','tables'))==False: + os.makedirs(os.path.join(config['output_folder'],'step4_livestock_demand','tables')) + if os.path.isdir(os.path.join(config['output_folder'],'step4_livestock_demand','figures'))==False: + os.makedirs(os.path.join(config['output_folder'],'step4_livestock_demand','figures')) + + # Load template map + dset = Dataset(config['templatemap_path']) + template_lat = dset.variables['lat'][:] + template_lon = dset.variables['lon'][:] + template_res = template_lon[1]-template_lon[0] + varname = list(dset.variables.keys())[-1] + template_np = np.array(dset.variables[varname][:]) + + # Determine map sizes + mapsize_global = (np.round(180/template_res).astype(int),np.round(360/template_res).astype(int)) + mapsize_template = template_np.shape + row_upper,col_left = latlon2rowcol(template_lat[0],template_lon[0],template_res,90,-180) + + # Compute area for each grid-cell (includes correction because lat/lon + # values in templates are often not rounded... + xi, yi = np.meshgrid(np.arange(-180+template_res/2,180+template_res/2,template_res), np.arange(90-template_res/2,-90-template_res/2,-template_res)) + if yi.shape[0]>np.round(180/template_res): + yi, xi = yi[:-1,:], xi[:-1,:] + if yi.shape[1]>np.round(360/template_res): + yi, xi = yi[:,:-1], xi[:,:-1] + area_map = (40075*template_res/360)**2*np.cos(np.deg2rad(yi)) + + # List of years + years = np.arange(config['year_start'],config['year_end']+1).astype(int) + + # List of countries and US states + country_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'un_country_codes.csv')) + state_codes = pd.read_csv(os.path.join(config['ancillary_data_folder'],'us-state-ansi-fips.csv')) + + # Load USGS livestock withdrawal estimates + table_usgs_livestock_withdrawal = pd.read_csv(os.path.join(config['output_folder'],'step2_domestic_demand','tables','usgs_livestock_withdrawal.csv'),index_col=0).values + + + ############################################################################ + # Load country border raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling country border raster') + t0 = time.time() + country_code_map = load_country_code_map(os.path.join(config['world_borders_folder'],'TM_WORLD_BORDERS_UN_rasterized.tif'),mapsize_global) + country_code_map = fill(country_code_map) + country_code_map_1800x3600 = resize(country_code_map,(1800,3600),order=0,mode='edge',anti_aliasing=False) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load US states raster + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and resampling US state border raster') + t0 = time.time() + 
state_code_map = load_us_state_code_map(os.path.join(config['us_states_folder'],'cb_2018_us_state_500k_rasterized.tif'),mapsize_global) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load AQUASTAT livestock withdrawal data (calculated from difference + # between agriculture and irrigation) + ############################################################################ + + aquastat = pd.read_csv(os.path.join(config['aquastat_folder'],'aquastat_clean.csv'),index_col=False) + + table_aquastat_irrigation_withdrawal = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + table_aquastat_agriculture_withdrawal = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + for ii in np.arange(len(years)): + sel_ir = (aquastat['Area Id']==country_code) & (aquastat['Variable Name']=='Irrigation water withdrawal') & (aquastat['Year']==years[ii]) + if np.sum(sel_ir)>0: + table_aquastat_irrigation_withdrawal[jj,ii] = aquastat['Value'][sel_ir].values # km3/year + sel_ag = (aquastat['Area Id']==country_code) & (aquastat['Variable Name']=='Agricultural water withdrawal') & (aquastat['Year']==years[ii]) + if np.sum(sel_ag)>0: + table_aquastat_agriculture_withdrawal[jj,ii] = aquastat['Value'][sel_ag].values # km3/year + + # Compute livestock withdrawal as difference between agriculture and irrigation + table_aquastat_withdrawal_livestock = table_aquastat_agriculture_withdrawal-table_aquastat_irrigation_withdrawal + table_aquastat_withdrawal_livestock[table_aquastat_withdrawal_livestock<0] = np.NaN + + # Save to csv + pd.DataFrame(table_aquastat_irrigation_withdrawal,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','aquastat_irrigation_withdrawal.csv')) + pd.DataFrame(table_aquastat_agriculture_withdrawal,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','aquastat_agriculture_withdrawal.csv')) + pd.DataFrame(table_aquastat_withdrawal_livestock,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','aquastat_livestock_withdrawal.csv')) + + + ############################################################################ + # Load Gridded Livestock of the World (GLW) 3 data and estimate the total + # mass + ############################################################################ + + # Table with GLW animal types and average weight in kg (rough guesses based + # on quick Google searches) + glw_table = np.array([\ + ['Bf','Buffaloes',700],\ + ['Ch','Chicken',3],\ + ['Ct','Cattle',630],\ + ['Dk','Ducks',1.5],\ + ['Gt','Goats',30],\ + ['Ho','Horses',500],\ + ['Pg','Pigs',100],\ + ['Sh','Sheep',75],\ + ],dtype=object) + + # Load data for each species + glw_mass_maps = np.zeros((mapsize_global[0],mapsize_global[1],len(glw_table)),dtype=np.single)*np.NaN + for ii in np.arange(len(glw_table)): + src = rasterio.open(os.path.join(config['glw_folder'],'5_'+glw_table[ii][0]+'_2010_Da.tif')) + data = np.array(src.read(1).astype(np.single)) + src.close() + data[data==-np.Inf] = np.NaN + data[np.isnan(data)] = 0 + mapsize_native = data.shape + data = resize(data,mapsize_global)*mapsize_native[0]/mapsize_global[0] # Number of animals per grid-cell + glw_mass_maps[:,:,ii] = data*glw_table[ii][2] # kg 
per grid-cell + #print(glw_table[ii][1]+' total mass '+str(np.round(np.mean(glw_mass_maps[:,:,ii])))+' kg') + + # Compute total mass + glw_mass_map = np.sum(glw_mass_maps,axis=2) # kg per grid-cell + + + ############################################################################ + # Compute total livestock mass for each country + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Computating GLW total livestock mass for each country') + t0 = time.time() + table_glw_livestock_mass = np.zeros((country_codes.shape[0],),dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + mask = country_code_map==country_code + table_glw_livestock_mass[jj] = np.sum(glw_mass_map[mask])+1 # +1 to avoid divide by zero + pd.DataFrame(table_glw_livestock_mass,index=country_codes['name'],columns=['Livestock mass [kg]']).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','glw_livestock_mass.csv')) + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Load GCAM withdrawals and rescale for each country using GLW total + # livestock mass estimates + ############################################################################ + + gcam_output = pd.read_csv(os.path.join(config['gcam_folder'],'water_demand.csv')) + + gcam_regions = np.array([\ + [['Africa_Eastern'],['Burundi', 'Comoros', 'Djibouti', 'Eritrea', 'Ethiopia', 'Kenya', 'Madagascar', 'Mauritius', 'Réunion', 'Rwanda', 'Sudan', 'Somalia', 'Uganda','South Sudan']],\ + [['Africa_Northern'],['Algeria', 'Egypt', 'Western Sahara', 'Libya', 'Morocco', 'Tunisia']],\ + [['Africa_Southern'],['Angola', 'Botswana', 'Lesotho', 'Mozambique', 'Malawi', 'Namibia', 'Eswatini', 'Tanzania, United Republic of', 'Zambia', 'Zimbabwe']],\ + [['Africa_Western'],['Benin', 'Burkina Faso', 'Central African Republic', "Côte d'Ivoire", 'Cameroon', 'Congo, Democratic Republic of the', 'Congo', 'Cabo Verde', 'Gabon', 'Ghana', 'Guinea', 'Gambia', 'Guinea-Bissau', 'Equatorial Guinea', 'Liberia', 'Mali', 'Mauritania', 'Niger', 'Nigeria', 'Senegal', 'Sierra Leone', 'Sao Tome and Principe', 'Chad', 'Togo']],\ + [['Argentina'],['Argentina']],\ + [['Australia_NZ'],['Australia', 'New Zealand']],\ + [['Brazil'],['Brazil']],\ + [['Canada'],['Canada']],\ + [['Central America and the Caribbean'],['Aruba', 'Anguilla', 'Aruba', 'Sint Maarten (Dutch part)', 'Curaçao', 'Bonaire, Sint Eustatius and Saba', 'Antigua and Barbuda', 'Bahamas', 'Belize', 'Bermuda', 'Barbados', 'Costa Rica', 'Cuba', 'Cayman Islands', 'Dominica', 'Dominican Republic', 'Guadeloupe', 'Grenada', 'Guatemala', 'Honduras', 'Haiti', 'Jamaica', 'Saint Kitts and Nevis', 'Saint Lucia', 'Montserrat', 'Martinique', 'Nicaragua', 'Panama', 'El Salvador', 'Trinidad and Tobago', 'Saint Vincent and the Grenadines', 'Puerto Rico', 'Virgin Islands (U.S.)', 'Virgin Islands (British)']],\ + [['Central Asia'],['Armenia', 'Azerbaijan', 'Georgia', 'Kazakhstan', 'Kyrgyzstan', 'Mongolia', 'Tajikistan', 'Turkmenistan', 'Uzbekistan']],\ + [['China'],['China', 'Hong Kong', 'Macao', 'Taiwan, Province of China']],\ + [['Colombia'],['Colombia']],\ + [['EU-12'],['Bulgaria', 'Cyprus', 'Czechia', 'Estonia', 'Hungary', 'Lithuania', 'Latvia', 'Malta', 'Poland', 'Romania', 'Slovakia', 'Slovenia']],\ + [['EU-15'],['Andorra', 'Austria', 'Belgium', 'Denmark', 
'Finland', 'France', 'Germany', 'Greece', 'Greenland', 'Ireland', 'Italy', 'Luxembourg', 'Monaco', 'Netherlands', 'Portugal', 'Sweden', 'Spain', 'United Kingdom of Great Britain and Northern Ireland','San Marino']],\ + [['Europe_Eastern'],['Belarus', 'Moldova, Republic of', 'Ukraine']],\ + [['European Free Trade Association'],['Iceland', 'Norway', 'Switzerland']],\ + [['Europe_Non_EU'],['Albania', 'Bosnia and Herzegovina', 'Croatia', 'North Macedonia', 'Montenegro', 'Serbia', 'Turkey']],\ + [['India'],['India']],\ + [['Indonesia'],['Indonesia']],\ + [['Japan'],['Japan']],\ + [['Mexico'],['Mexico']],\ + [['Middle East'],['United Arab Emirates', 'Bahrain', 'Iran (Islamic Republic of)', 'Iraq', 'Israel', 'Jordan', 'Kuwait', 'Lebanon', 'Oman', 'Palestine, State of', 'Qatar', 'Saudi Arabia', 'Syrian Arab Republic', 'Yemen']],\ + [['Pakistan'],['Pakistan']],\ + [['Russia'],['Russian Federation']],\ + [['South Africa'],['South Africa']],\ + [['South America_Northern'],['French Guiana', 'Guyana', 'Suriname', 'Venezuela (Bolivarian Republic of)']],\ + [['South America_Southern'],['Bolivia (Plurinational State of)', 'Chile', 'Ecuador', 'Peru', 'Paraguay', 'Uruguay']],\ + [['South Asia'],['Afghanistan', 'Bangladesh', 'Bhutan', 'Sri Lanka', 'Maldives', 'Nepal']],\ + [['Southeast Asia'],['American Samoa', 'Brunei Darussalam', 'Cocos (Keeling) Islands', 'Cook Islands', 'Christmas Island', 'Fiji', 'Micronesia (Federated States of)', 'Guam', 'Cambodia', 'Kiribati', "Lao People's Democratic Republic", 'Marshall Islands', 'Myanmar', 'Northern Mariana Islands', 'Malaysia', 'Mayotte', 'New Caledonia', 'Norfolk Island', 'Niue', 'Nauru', 'Northern Mariana Islands','Palau','Guam','Northern Mariana Islands', 'Pitcairn', 'Philippines', 'Palau', 'Papua New Guinea', "Korea (Democratic People's Republic of)", 'French Polynesia', 'Singapore', 'Solomon Islands', 'Seychelles', 'Thailand', 'Tokelau', 'Timor-Leste', 'Tonga', 'Tuvalu', 'Viet Nam', 'Vanuatu', 'Samoa']],\ + [['South Korea'],['Korea, Republic of']],\ + [['Taiwan'],['']],\ + [['USA'],['United States of America']],\ + ],dtype=object) + + # Values for each country and year + table_gcam_glw_withdrawal_livestock = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_name = country_codes.iloc[jj]['name'] + + # GCAM withdrawals represent regional totals. To obtain country estimates, + # we need to know the livestock mass of GCAM region and country + + # Which region are we in? + region_ind = -1 + for kk in np.arange(len(gcam_regions)): + if country_name in gcam_regions[kk][1]: + region_ind = kk + break + if region_ind==-1: continue + region_name = gcam_regions[region_ind][0][0] + + # What is the total livestock mass of the region? 
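+        # The regional GCAM withdrawal is later split among the region's countries in proportion to their share of this mass (mass_frac).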
+ region_mass = 0 + region_countries = gcam_regions[region_ind][1] + for region_country in region_countries: + hh = country_codes['name'].tolist().index(region_country) + region_mass += table_glw_livestock_mass[hh] + + # Aggregate different sectors + sel = ["Reference" in s for s in gcam_output['scenario']] &\ + (gcam_output['region']==region_name) &\ + (gcam_output['sector'].isin(['SheepGoat','Beef','Dairy','Pork','Poultry'])) + gcam_livestock_withdrawal = np.sum(gcam_output.iloc[np.where(sel)[0],3:-1].values,axis=0) # km3/year + gcam_years = np.array([int(x) for x in gcam_output.columns[3:-1].values]) + + # Rescale regional withdrawals based on country population + mass_frac = table_glw_livestock_mass[jj]/region_mass + for kk in np.arange(len(gcam_years)): + try: + ii = years.tolist().index(gcam_years[kk]) + except: + continue + table_gcam_glw_withdrawal_livestock[jj,ii] = gcam_livestock_withdrawal[kk]*mass_frac + + # Save to csv + pd.DataFrame(table_gcam_glw_withdrawal_livestock,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','gcam_glw_withdrawal_livestock.csv')) + + + ############################################################################ + # Verification of estimates. Huang et al. (2018) is supposed to represent + # difference between AQUASTAT agriculture and irrigation, which is the + # same approach as we use, but there are large differences. + ############################################################################ + + # Load Huang et al. (2018) livestock estimates + table_Huang_withdrawal_livestock = pd.read_csv(os.path.join(config['output_folder'],'step3a_industrial_demand','tables','Huang_withdrawal_livestock.csv'),index_col=0).values + + # What are the estimates around 2010 for each data source? + ii_2010 = years.tolist().index(2010) + aquastat = table_aquastat_withdrawal_livestock[:,ii_2010+2] + gcam = table_gcam_glw_withdrawal_livestock[:,ii_2010] + huang = table_Huang_withdrawal_livestock[:,ii_2010] + sel = np.isnan(aquastat+gcam+huang)==False + + # Print estimates for a few major countries + for jj in np.arange(country_codes.shape[0]): + country_name = country_codes.iloc[jj]['name'] + if aquastat[jj]>0.2: + print('-------------------------------------------------------------------------------') + print(country_name) + print('aquastat '+str(np.round(aquastat[jj],2))) + print('gcam '+str(np.round(gcam[jj],2))) + print('huang '+str(np.round(huang[jj],2))) + + # How are the estimates correlated? + stats.spearmanr(aquastat[sel], gcam[sel]) + stats.spearmanr(aquastat[sel], huang[sel]) + stats.spearmanr(gcam[sel], huang[sel]) + + # What do the different distributions look like? + plt.figure(figsize=(8,6)) + plt.hist(aquastat[sel], bins=300, alpha=0.5, range=(0,15), label="aquastat") + plt.hist(gcam[sel], bins=300, alpha=0.5, range=(0,15), label="gcam") + plt.hist(huang[sel], bins=300, alpha=0.5, range=(0,15), label="huang") + plt.legend() + plt.show(block=False) + + # Can we derive a simple correction factor to improve the GCAM+GLW estimates? Not really because mean and median give opposite results... + np.mean(aquastat[sel])/np.mean(gcam[sel]) + np.median(aquastat[sel])/np.median(gcam[sel]) + + + ############################################################################ + # Load and reproject Huang et al. 
(2018) water demand data (only used for + # comparison) + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Loading and reprojecting Huang et al. (2018) water demand data') + t0 = time.time() + + # Loop over vars + vars = ['liv'] + Huang_withdrawal = np.zeros((len(vars),360,720,480),dtype=np.single)*np.NaN + for vv in np.arange(len(vars)): + var = vars[vv] + + # Load raw data + dset = Dataset(os.path.join(config['huang_folder'],'withd_'+var+'.nc')) + raw_data = np.array(dset.variables['withd_'+var][:]) + raw_lat = np.array(dset.variables['lat'][:]) + raw_lon = np.array(dset.variables['lon'][:]) + dset.close() + + # Reproject data + rows,cols = latlon2rowcol(raw_lat,raw_lon,0.5,90,-180) + for ii in np.arange(raw_data.shape[0]): + reprojected = np.zeros((360,720),dtype=np.single) + reprojected[rows,cols] = raw_data[ii,:] + Huang_withdrawal[vv,:,:,ii] = reprojected + + print("Time elapsed is "+str(time.time()-t0)+" sec") + + + ############################################################################ + # Loop through countries and compute final livestock withdrawal time + # series. We will use AQUASTAT just for rescaling, not to derive time + # series, due to the uncertainty in the AQUASTAT estimates (which + # represent the difference between agriculture and irrigation estimates). + ############################################################################ + + table_livestock_industry = np.zeros((country_codes.shape[0],len(years)),dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + country_name = country_codes.iloc[jj]['name'] + country_acronym = country_codes.iloc[jj]['alpha-3'] + print('jj='+str(jj)+' '+country_name) + + # Linearly interpolate and nearest-neighbor extrapolate GCAM+GLW time series + ts_gcam = pd.Series(table_gcam_glw_withdrawal_livestock[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values + + # If AQUASTAT estimate is available, rescale GCAM+GLW time series (using median + # rescaling factor to reduce uncertainty) + factors = (table_aquastat_withdrawal_livestock[jj,:]+10**-6)/(ts_gcam+10**-6) + factor = np.nanmedian(factors) + if np.isnan(factor)==False: + ts_gcam = ts_gcam*factor + + # If AQUASTAT estimate is not available, use GCAM+GLW estimate + table_livestock_industry[jj,:] = ts_gcam + + pd.DataFrame(table_livestock_industry,index=country_codes['name'],columns=years).to_csv(os.path.join(config['output_folder'],'step4_livestock_demand','tables','livestock_industry.csv')) + + + ############################################################################ + # Downscale livestock withdrawals + ############################################################################ + + print('-------------------------------------------------------------------------------') + print('Downscaling livestock withdrawals') + t0 = time.time() + + varname = 'liv' + + #-------------------------------------------------------------------------- + # Initialize netCDF + #-------------------------------------------------------------------------- + + file = os.path.join(config['output_folder'],'step4_livestock_demand',varname+'.nc') + + if os.path.isfile(file): + os.remove(file) + + ncfile = Dataset(file, 'w', format='NETCDF4') + ncfile.history = 'Created on %s' % datetime.utcnow().strftime('%Y-%m-%d %H:%M') + + ncfile.createDimension('lon', 
len(template_lon)) + ncfile.createDimension('lat', len(template_lat)) + ncfile.createDimension('time', None) + + ncfile.createVariable('lon', 'f8', ('lon',)) + ncfile.variables['lon'][:] = template_lon + ncfile.variables['lon'].units = 'degrees_east' + ncfile.variables['lon'].long_name = 'longitude' + + ncfile.createVariable('lat', 'f8', ('lat',)) + ncfile.variables['lat'][:] = template_lat + ncfile.variables['lat'].units = 'degrees_north' + ncfile.variables['lat'].long_name = 'latitude' + + ncfile.createVariable('time', 'f8', 'time') + ncfile.variables['time'].units = 'days since 1979-01-02 00:00:00' + ncfile.variables['time'].long_name = 'time' + ncfile.variables['time'].calendar = 'proleptic_gregorian' + + ncfile.createVariable(varname, np.single, ('time', 'lat', 'lon'), zlib=True, chunksizes=(1,200,200,), fill_value=-9999, least_significant_digit=1) + ncfile.variables[varname].units = 'mm/d' + + for ii in np.arange(len(years)): + year = years[ii] + print(varname+' year '+str(year)) + t1 = time.time() + + + #-------------------------------------------------------------------------- + # Spatial downscaling + #-------------------------------------------------------------------------- + + # Spatially downscale annual country withdrawals using GLW total livestock mass grid + data_annual_map = np.zeros(mapsize_global,dtype=np.single)*np.NaN + for jj in np.arange(country_codes.shape[0]): + country_code = country_codes.iloc[jj]['country-code'] + mask = country_code_map==country_code + country_val = table_livestock_industry[jj,ii] # Country withdrawal estimate + data_annual_map[mask] = 10**6*glw_mass_map[mask]*country_val/np.sum(glw_mass_map[mask]*area_map[mask]) + #np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + + # Spatially downscale annual US state withdrawals using population data + for jj in np.arange(state_codes.shape[0]): + state_code = state_codes.iloc[jj]['st'] + mask = state_code_map==state_code + state_val = pd.Series(table_usgs_livestock_withdrawal[jj,:]).interpolate(method="linear",fill_value="extrapolate", limit_direction="both").values[ii] # State withdrawal estimate from preceding script + data_annual_map[mask] = 10**6*glw_mass_map[mask]*state_val/np.sum(glw_mass_map[mask]*area_map[mask]) + #np.sum(area_map[mask]*data_annual_map[mask]/1000/1000) + + # Check if there are too many missing values + nan_percentage = 100*np.sum(np.isnan(data_annual_map))/(data_annual_map.shape[0]*data_annual_map.shape[1]) + assert nan_percentage<37 + + # Replace NaNs with zeros + data_annual_map[np.isnan(data_annual_map)] = 0 + + + #-------------------------------------------------------------------------- + # Plot figure + #-------------------------------------------------------------------------- + + # Initialize figure + f, (ax1, ax2) = plt.subplots(2, 1, sharex=True) + + # Subpanel 1 + ax1.imshow(np.sqrt(imresize_mean(data_annual_map,(360,720))),vmin=0,vmax=15,cmap=plt.get_cmap('YlGnBu')) + ax1.set_title('Beck et al. (2022) withdrawal (mm/month)') + + # Subpanel 2 + try: + varindex = 0 + timeindex = year-1971 + ax2.imshow(np.sqrt(np.sum(Huang_withdrawal[varindex,:,:,np.arange(timeindex*12,timeindex*12+12)],axis=0)),vmin=0,vmax=15,cmap=plt.get_cmap('YlGnBu')) + ax2.set_title('Huang et al. 
(2018) withdrawal (mm/month)') + except: + pass + + # Save figure + f.set_size_inches(10, 10) + plt.savefig(os.path.join(config['output_folder'],'step4_livestock_demand','figures',varname+'_'+str(year)+'.png'),dpi=150) + plt.close() + + + #-------------------------------------------------------------------------- + # Save to netCDF + #-------------------------------------------------------------------------- + + for month in np.arange(1,13): + month_ndays = monthrange(year,month)[1] + data = data_annual_map/365.25 + data = np.round(data,decimals=2) + data = data[row_upper:row_upper+len(template_lat),col_left:col_left+len(template_lon)] # Subset to template map area + index = (year-config['year_start'])*12+month-1 + ncfile.variables['time'][index] = (pd.to_datetime(datetime(year,month,1))-pd.to_datetime(datetime(1979, 1, 1))).total_seconds()/86400 + ncfile.variables[varname][index,:,:] = data + + print("Time elapsed is "+str(time.time()-t1)+" sec") + + ncfile.close() + + print("Total time elapsed is "+str(time.time()-t0)+" sec") + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/src/lisfloodutilities/water-demand-historic/tools.py b/src/lisfloodutilities/water-demand-historic/tools.py new file mode 100644 index 0000000..f3ae8fc --- /dev/null +++ b/src/lisfloodutilities/water-demand-historic/tools.py @@ -0,0 +1,108 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +""" +Copyright 2019-2023 European Union +Licensed under the EUPL, Version 1.2 or as soon they will be approved by the European Commission subsequent versions of the EUPL (the "Licence"); +You may not use this work except in compliance with the Licence. +You may obtain a copy of the Licence at: +https://joinup.ec.europa.eu/sites/default/files/inline-files/EUPL%20v1_2%20EN(1).txt +Unless required by applicable law or agreed to in writing, software distributed under the Licence is distributed on an "AS IS" basis, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the Licence for the specific language governing permissions and limitations under the Licence. + +""" +__author__ = "Hylke E. 
Beck" +__date__ = "January 2022" + +import os, sys, glob, time, pdb +import pandas as pd +import numpy as np +from netCDF4 import Dataset +from skimage.transform import resize +from skimage.transform import downscale_local_mean +from datetime import datetime, timedelta +from scipy import ndimage as nd +import rasterio + +def load_country_code_map(filepath,mapsize): + + src = rasterio.open(filepath) + country_code_map = src.read(1).astype(np.single) + country_code_map_refmatrix = src.get_transform() + src.close() + + # Check if country border raster covers entire globe + assert (country_code_map_refmatrix[0]==-180) & (country_code_map_refmatrix[3]==90) + + country_code_map = resize(country_code_map,mapsize,order=0,mode='edge',anti_aliasing=False) + country_code_map[country_code_map==158] = 156 # Add Taiwan to China + country_code_map[country_code_map==736] = 729 # South Sudan missing from map + country_code_map[country_code_map==0] = np.NaN + + return country_code_map + +def load_us_state_code_map(filepath,mapsize): + + src = rasterio.open(filepath) + state_code_map = src.read(1).astype(np.single) + state_code_map = resize(state_code_map,mapsize,order=0,mode='edge',anti_aliasing=False) + state_code_map_refmatrix = src.get_transform() + src.close() + + # Check if state border raster covers entire globe + assert (state_code_map_refmatrix[0]==-180) & (state_code_map_refmatrix[3]==90) + + return state_code_map + +def latlon2rowcol(lat,lon,res,lat_upper,lon_left): + row = np.round((lat_upper-lat)/res-0.5).astype(int) + col = np.round((lon-lon_left)/res-0.5).astype(int) + return row.squeeze(),col.squeeze() + +def rowcol2latlon(row,col,res,lat_upper,lon_left): + lat = lat_upper-row*res-res/2 + lon = lon_left+col*res+res/2 + return lat.squeeze(),lon.squeeze() + +def imresize_mean(oldarray,newshape): + '''Resample an array using simple averaging''' + + oldshape = oldarray.shape + + factor = oldshape[0]/newshape[0] + + if factor==int(factor): + factor = int(factor) + newarray = downscale_local_mean(oldarray,(factor,factor)) + + else: + factor = 1 + while newshape[0]*factor