## Running the pipeline
The pipeline code is available in this repository. To use the code, you have to clone the repository with git:

```bash
git clone https://github.com/eqasim-org/ile-de-france
```
which will create the `ile-de-france` folder containing the pipeline code. To set up all dependencies, especially the synpp package, which is the foundation of the pipeline code, we recommend setting up a Python environment using Anaconda:

```bash
cd ile-de-france
conda env create -f environment.yml
```

This will create a new Anaconda environment with the name `ile-de-france`. To activate the environment, run:

```bash
conda activate ile-de-france
```
Now have a look at `config.yml`, which configures the pipeline. Have a look at synpp in case you want to get a more general understanding of how it works. For the moment, it is important to adjust the following configuration values inside `config.yml`:

- `working_directory`: This should be an existing (ideally empty) folder where the pipeline will put temporary and cached files during runtime.
- `data_path`: This should be the path to the folder where you were collecting and arranging all the raw data sets as described above.
- `output_path`: This should be the path to the folder where the output data of the pipeline should be stored. It must exist and should ideally be empty for now.
- `output_formats`: This should specify the formats of the outputs. Available formats are `csv`, `gpkg`, `parquet` and `geoparquet`. The default is csv and gpkg: `["csv", "gpkg"]`.
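Put together, the relevant entries could look like the following sketch. The paths are placeholders for illustration, and the exact layout (synpp reads `working_directory` at the top level, the other values inside the `config` section) should be verified against the `config.yml` shipped with the repository:

```yaml
# Sketch only; compare with the config.yml in the repository
working_directory: cache

config:
  data_path: data
  output_path: output
  output_formats: ["csv", "gpkg"]
```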
To set up the working/output directories, create, for instance, a `cache` and an `output` directory. These are already configured in `config.yml`:

```bash
mkdir cache
mkdir output
```
Everything is now set up to run the pipeline. The way `config.yml` is configured, it will create the relevant output files in the `output` folder. To run the pipeline, call the synpp runner:

```bash
python3 -m synpp
```

It will automatically detect the `config.yml`, process all the pipeline code and eventually create the synthetic population. You should see a couple of stages running one after another. Most notably, the pipeline will first read all the raw data sets to filter them and put them into the correct internal formats.
After running, you should be able to see a couple of files in the `output` folder:

- `meta.json` contains some meta data, e.g. with which random seed or sampling rate the population was created, and when.
- `persons.csv` and `households.csv` contain all persons and households in the population with their respective sociodemographic attributes.
- `activities.csv` and `trips.csv` contain all activities and trips in the daily mobility patterns of these people, including attributes on the purposes of activities.
- `activities.gpkg` and `trips.gpkg` represent the same trips and activities, but in the spatial GPKG format. Activities contain point geometries to indicate where they happen, and the trips file contains line geometries to indicate the origin and destination of each trip.
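Once generated, these tables can be explored with any data frame library. The following sketch joins persons to their households with pandas; the column names (`person_id`, `household_id`, `household_size`) are assumptions for illustration and should be checked against the actual CSV headers:

```python
import pandas as pd

# Miniature stand-ins for persons.csv and households.csv; the real
# column names should be verified against the generated files.
persons = pd.DataFrame({
    "person_id": [1, 2, 3],
    "household_id": [10, 10, 11],
    "age": [34, 31, 58],
})
households = pd.DataFrame({
    "household_id": [10, 11],
    "household_size": [2, 1],
})

# Attach household attributes to each person via the shared identifier
merged = persons.merge(households, on="household_id", how="left")
print(merged)
```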
**Warning for Windows users:** The cache file paths can get very long and may exceed the 256-character path limit of Microsoft Windows. To avoid any issues, make sure the following registry entry is set to 1: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled`. You should also put git into long-path mode by calling: `git config --system core.longpaths true`
## Mode choice

The synthetic data generated by the pipeline so far does not include transport modes (car, bike, walk, pt, ...) for the individual trips, as assigning them consistently is a more computation-heavy process (including routing the individual trips for the modes). To add modes to the trip table, a light-weight MATSim simulation needs to be performed. For that, please configure the additional data requirements as described in the procedure to run a MATSim simulation.

After that, you can change the `mode_choice` entry in the pipeline configuration file `config.yml` to `true`:

```yaml
config:
  mode_choice: true
```

Running the pipeline again will add the `mode` column to the `trips.csv` file and its spatial equivalent.
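As a quick sanity check after mode choice has run, mode shares can be computed from the trips table. A minimal sketch with pandas, using a hypothetical miniature trips table (only the `mode` column corresponds to what the pipeline adds; the rest is made up for illustration):

```python
import pandas as pd

# Hypothetical miniature trips table; after mode choice, trips.csv
# contains a "mode" column like this one.
trips = pd.DataFrame({
    "trip_id": [0, 1, 2, 3],
    "mode": ["car", "walk", "pt", "car"],
})

# Share of trips per mode
shares = trips["mode"].value_counts(normalize=True)
print(shares)
```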
## Population projections

The pipeline makes it possible to use population projections from INSEE up to 2070. The same methodology can also be used to scale down the population. The process takes into account the marginal distributions of sex, age, their combination, and the total number of persons. The census data for the base year (see above) is reweighted according to those marginals using Iterative Proportional Updating.

To make use of the scaling, download the projection data from INSEE: *Les tableaux en Excel*, which contain all projection scenarios in Excel format. The default is the *Scénario central*, the central scenario.

Put the downloaded file into `data/projections`, so you will have the file `data/projections/donnees_detaillees_departementales.zip`.
Then, activate the projection procedure by defining the projection scenario and year in the configuration:
```yaml
config:
  # [...]
  projection_scenario: Central
  projection_year: 2030
```
You may choose any year (past or future) that is contained in the Excel files (sheet Population) in the downloaded archive. The same is true for the projection scenarios, which are based on the file names and documented in the Excel files’ Documentation sheet.
## Urban type

The pipeline can work with INSEE's urban type classification (*unité urbaine*), which distinguishes municipalities into center cities, suburbs, isolated cities, and unclassified ones. To impute the data (currently only for some HTS), activate it via the configuration:

```yaml
config:
  # [...]
  use_urban_type: true
```

In order to make use of it for activity chain matching, you can set a custom list of matching attributes like so:

```yaml
config:
  # [...]
  matching_attributes: ["urban_type", "*default*"]
```

The `*default*` trigger will be replaced by the default list of matching attributes.
Note that not all HTS implementations provide the urban type, so matching may not work with some of them. Most of them, however, contain the data; the code just needs to be updated to read it in.

To make use of the urban type, the following data is needed:

- Download the urban type data from INSEE. The pipeline is currently compatible with the 2023 data set (referencing 2020 boundaries).
- Put the downloaded zip file into `data/urban_type`, so you will have the file `data/urban_type/UU2020_au_01-01-2023.zip`

Then, you should be able to run the pipeline with the configuration explained above.
## Filter household travel survey data

By default, the pipeline filters out observations from the HTS that correspond to persons living or working outside the configured area (given as departments or regions). However, the national HTS (ENTD and EMP) may be very sparse in rural and undersampled areas. The parameter `filter_hts` (default `true`) allows disabling this prefiltering: when set to `false`, the whole set of persons and activity chains is used for generating a regional population:

```yaml
config:
  # [...]
  filter_hts: false
```
For validation, a table of person volumes by age range and trip purpose can be generated from the `analysis.synthesis.population` stage, as explained at the end of this documentation.
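Assuming the standard synpp layout of `config.yml`, requesting such a stage means listing it in the top-level `run` section (sketch only; verify against the repository's `config.yml`):

```yaml
# Sketch: run the validation stage alongside the usual pipeline stages
run:
  - analysis.synthesis.population
```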
## Exclude enterprises with no employees

The pipeline allows excluding all enterprises without any employees (those where `trancheEffectifsEtablissement` is NA, "NN" or "00") as indicated in the Sirene data used for workplace distribution. This can be activated via the configuration:

```yaml
config:
  # [...]
  exclude_no_employee: true
```
## INSEE 200m tiles data

The pipeline can use the INSEE 200m tiles data to locate the population, instead of using BAN or BDTOPO data. The population is located at the center of each tile, weighted by the INSEE population count of the tile.

In order to use this location source, download the 200m grid data from INSEE. The pipeline is currently compatible with the 2019 data set.

Put the downloaded zip file into `data/tiles_2019`, so you will have the file `data/tiles_2019/Filosofi2019_carreaux_200m_gpkg.zip`.

Then, activate it via the configuration:

```yaml
config:
  # [...]
  home_location_source: tiles
```
This parameter can also activate the use of BDTOPO data alone or together with BAN data to locate the population, with the values `building` and `addresses` respectively.
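For reference, the three values described above can be summarized in one config sketch (`addresses` shown as the selected example):

```yaml
config:
  # [...]
  # tiles:     INSEE 200m grid (this section)
  # building:  BDTOPO buildings only
  # addresses: BDTOPO together with BAN
  home_location_source: addresses
```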
## Education activities locations

The synthetic data generated by the pipeline so far distributes the population to education locations without any distinction by age or type of educational institution. To avoid sending young children to high school, for example, a matching of educational institutions and persons by age range can be activated via the configuration:

```yaml
config:
  # [...]
  education_location_source: weighted
```

For each educational institution, a weight is attributed in the pipeline based on the number of students provided in the BPE data. The pipeline can also work with a list of educational institutions from an external GeoJSON or GeoPackage file by using `addresses` as the parameter value. This file must include `education_type`, `commune_id`, `weight` and `geometry` as columns, with `weight` the number of students and `education_type` an educational institution code similar to the BPE ones.
```yaml
config:
  # [...]
  education_location_source: addresses
  education_file: education/education_addresses.geojson
```
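Such a file can be produced with any GIS tool. As an illustration only, the following sketch writes a minimal GeoJSON file with the required columns using the standard library; the `education_type` code, coordinates and weight are made-up placeholders, not values taken from the pipeline:

```python
import json

# One made-up school; education_type uses a BPE-style code as a
# placeholder, commune_id is an INSEE municipality code, weight is
# the number of students.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [2.35, 48.85]},
    "properties": {
        "education_type": "C301",  # placeholder code
        "commune_id": "75101",
        "weight": 250,
    },
}

collection = {"type": "FeatureCollection", "features": [feature]}

with open("education_addresses.geojson", "w") as f:
    json.dump(collection, f, indent=2)
```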
## Income

This pipeline allows using the Bhepop2 package for income assignment. By default, Eqasim infers income from the global income distribution by municipality from the Filosofi data set. An income value is drawn from this distribution, independent of the household characteristics. This method is called `uniform`.

Bhepop2 uses income distributions on subpopulations. For instance, Filosofi provides distributions depending on household size. Bhepop2 tries to match all the available distributions, instead of just the global one. This results in more accurate income assignment on subpopulations, but also on the global synthetic population. See the documentation for more information on the assignment algorithm.

To use the `bhepop2` method, provide the following config:

```yaml
config:
  income_assignation_method: bhepop2
```

Caution: this method will fail for communes where the Filosofi subpopulation distributions are missing. In this case, we fall back to the `uniform` method.