Remarkable Trees Paris – Advanced Pipeline
This notebook explores the distribution and characteristics of remarkable trees across Paris neighborhoods using data from Paris Open Data. The dataset, created in 2006 by the Direction des Espaces Verts et de l'Environnement - Ville de Paris, includes geo-located remarkable trees found in diverse locations such as gardens, cemeteries, streets, schools, and early childhood institutions. These trees are notable for their age, size, rarity, or historical significance.
The study maps these trees to their respective neighborhoods (quartiers) and enriches the data with the following neighborhood-level metrics:
- Count of remarkable trees: Total number of remarkable trees per neighborhood.
- Average circumference: Mean circumference of trees (in cm) per neighborhood.
- Average height: Mean height of trees (in meters) per neighborhood.
- Most common genus: The predominant tree genus in each neighborhood.
- Oldest plantation date: The earliest recorded plantation date per neighborhood.
- Summary of resumes: An LLM-generated summary (in English) of the combined 'Résumé' (summary notes) for all remarkable trees in each neighborhood.
- Summary of descriptions: An LLM-generated summary (in English) of the combined 'Descriptif' (detailed descriptions) for all remarkable trees in each neighborhood.
Through this pipeline, the notebook processes the data, applies spatial filters, and visualises the enriched metrics on interactive maps, offering insights into how remarkable trees are distributed and characterized across Paris.
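Under the hood, the first five neighborhood-level metrics are grouped aggregations. A minimal pandas sketch, using toy data and hypothetical column names (not the actual dataset schema), illustrates the idea:

```python
import pandas as pd

# Toy stand-in for the remarkable-trees table, already mapped to quartiers.
trees = pd.DataFrame({
    "quartier": ["Montmartre", "Montmartre", "Marais"],
    "circumference_cm": [300.0, 500.0, 250.0],
    "height_m": [20.0, 30.0, 15.0],
    "genus": ["Platanus", "Platanus", "Fagus"],
    "planted": pd.to_datetime(["1850-01-01", "1900-06-15", "1880-03-01"]),
})

# One groupby produces all five numeric/categorical metrics per quartier.
metrics = trees.groupby("quartier").agg(
    tree_count=("genus", "size"),
    avg_circumference=("circumference_cm", "mean"),
    avg_height=("height_m", "mean"),
    most_common_genus=("genus", lambda s: s.mode().iloc[0]),
    oldest_plantation=("planted", "min"),
)
print(metrics)
```

The two LLM-generated summaries follow the same grouping pattern, but replace the aggregation function with a call to a language model.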
#####################################################################################
# ⚠️ INFORMATION ABOUT THE CURRENT CELL ⚠️
# The following shows custom aggregation functions
# used later on in the pipeline
# Make sure to export your OPENAI_API_KEY environment variable in the
# shell running this notebook.
#####################################################################################
import pandas as pd
import ell

def most_common_genre(series):
    if series.empty:
        return None
    mode = series.mode()
    return mode.iloc[0] if not mode.empty else None

def oldest_plantation_date(series):
    if series.empty:
        return None
    if not pd.api.types.is_datetime64_any_dtype(series):
        try:
            series = pd.to_datetime(series, errors='coerce', utc=True)
        except Exception as e:
            raise ValueError(f"Could not convert series to datetime: {e}")
    return series.min()

@ell.simple(model="gpt-4")
def summarize_texts(texts: str):
    """You are an urban planning expert who writes summaries for the urban offices of city councils."""
    return f"Summarise the following texts very concisely, output everything in English please:\n\n{texts}"

def summarize_resumes(series):
    if series.empty:
        return None
    combined_text = " ".join(series)
    try:
        summary = summarize_texts(combined_text)
        return summary
    except Exception as e:
        print(f"Error generating summary: {e}")
        return "Summary unavailable"
WARNING: No API key found for model `gpt-4` using client `str` at time of definition. Can be okay if custom client specified later! https://docs.ell.so/core_concepts/models_and_api_clients.html
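A quick sanity check of the two non-LLM aggregators on toy Series (the functions are redefined here so the snippet is self-contained):

```python
import pandas as pd

def most_common_genre(series):
    if series.empty:
        return None
    mode = series.mode()
    return mode.iloc[0] if not mode.empty else None

def oldest_plantation_date(series):
    if series.empty:
        return None
    if not pd.api.types.is_datetime64_any_dtype(series):
        series = pd.to_datetime(series, errors="coerce", utc=True)
    return series.min()

genres = pd.Series(["Platanus", "Fagus", "Platanus"])
dates = pd.Series(["1850-01-01", "1900-06-15"])  # strings, to exercise the conversion path

print(most_common_genre(genres))      # most frequent value
print(oldest_plantation_date(dates))  # earliest date, parsed as UTC
```

Note that `oldest_plantation_date` uses `errors='coerce'`, so unparseable dates become `NaT` and are ignored by `min()` rather than raising.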
#####################################################################################
# ⚠️ INFORMATION ABOUT THE CURRENT CELL ⚠️
# Some data wrangling is necessary because the raw data is not directly
# usable as-is, hence the "manual" load to create a pre-processed
# version of the dataset
#####################################################################################
from urban_mapper import CSVLoader
import urban_mapper
file_path = "./arbresremarquablesparis.csv"
df = CSVLoader(file_path, "idbase", "idbase", separator=";").load()
df[['latitude', 'longitude']] = df['Geo point'].str.split(',', expand=True)
df['latitude'] = df['latitude'].str.strip().astype(float)
df['longitude'] = df['longitude'].str.strip().astype(float)
df.drop(columns=["Geo point"], inplace=True)
df.to_parquet("./trees_paris.parquet")
mapper = urban_mapper.UrbanMapper()
mapper.table_vis.interactive_display(df)
from urban_mapper.pipeline import UrbanPipeline
import urban_mapper as um

pipeline = UrbanPipeline([
    ("urban_layer", (
        um.UrbanMapper().urban_layer
        .with_type("region_neighborhoods")
        .from_place("Paris, France")
        .with_mapping(
            longitude_column="longitude",
            latitude_column="latitude",
            output_column="nearest_quartier",
        )
        .build()
    )),
    ("loader", (
        um.UrbanMapper().loader
        .from_file("./trees_paris.parquet")
        .with_columns(longitude_column="longitude", latitude_column="latitude")
        .build()
    )),
    ("filter", um.UrbanMapper().filter.with_type("BoundingBoxFilter").build()),
    ("enrich_trees_count", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier")
        .count_by(output_column="ramarquable_trees_count")
        .build()
    )),
    ("enrich_avg_circonference", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="circonference en cm")
        .aggregate_by(method="mean", output_column="avg_circonference")
        .build()
    )),
    ("enrich_avg_hauteur", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="hauteur en m")
        .aggregate_by(method="mean", output_column="avg_hauteur")
        .build()
    )),
    ("enrich_most_common_genre", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="genre")
        .aggregate_by(method=most_common_genre, output_column="most_common_genre")
        .build()
    )),
    ("enrich_oldest_plantation", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="date de plantation")
        .aggregate_by(method=oldest_plantation_date, output_column="oldest_plantation_date")
        .build()
    )),
    ("enrich_resume_summary", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="Résumé")
        .aggregate_by(method=summarize_resumes, output_column="resume_summary")
        .build()
    )),
    ("enrich_description_summary", (
        um.UrbanMapper().enricher
        .with_data(group_by="nearest_quartier", values_from="Descriptif")
        .aggregate_by(method=summarize_resumes, output_column="descriptif_summary")
        .build()
    )),
    ("visualiser", (
        um.UrbanMapper().visual
        .with_type("Interactive")
        .with_style({
            "tiles": "CartoDB Positron",
            "tooltip": [
                "ramarquable_trees_count",
                "avg_circonference",
                "avg_hauteur",
                "most_common_genre",
                "oldest_plantation_date",
                "resume_summary",
                "descriptif_summary",
                "name",
            ],
            "colorbar_text_color": "gray",
        })
        .build()
    )),
])
UserWarning: Administrative levels vary across regions. The system will infer the most appropriate admin_level based on the data and division type, but you can (and it is recommended to) override it with 'overwrite_admin_level'.
UserWarning: Inferred admin_level for neighborhood: 10. Other available levels: ['10', '6', '7', '8', '9']. You can override this with 'overwrite_admin_level' if desired.
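Passing a callable to `aggregate_by` is, conceptually, a grouped `apply`. A sketch with toy data and a stand-in for the LLM call (the enricher internals shown here are an assumption, not the library's actual implementation):

```python
import pandas as pd

trees = pd.DataFrame({
    "nearest_quartier": ["Q1", "Q1", "Q2"],
    "Résumé": ["Old plane tree.", "Giant beech.", "Rare ginkgo."],
})

def summarize_resumes(series):
    # Stand-in for the LLM call: join the group's texts and truncate.
    if series.empty:
        return None
    return " ".join(series)[:60]

# One summary per quartier, like the enrich_resume_summary step.
summary = trees.groupby("nearest_quartier")["Résumé"].apply(summarize_resumes)
print(summary)
```

Each group's `Résumé` values are concatenated and reduced to a single value per `nearest_quartier`, which is exactly the shape the enricher writes back onto the urban layer.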
# Execute the pipeline
mapped_data, enriched_layer = pipeline.compose_transform()
# Visualise the enriched metrics
fig = pipeline.visualise([
    "ramarquable_trees_count",
    "avg_circonference",
    "avg_hauteur",
    "most_common_genre",
    "oldest_plantation_date",
    "resume_summary",
    "descriptif_summary",
])
fig
# Save the pipeline
pipeline.save("./remarquable_trees_paris.dill")
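The `.dill` extension suggests the pipeline is serialized with `dill`, which extends pickle so the custom aggregation callables above can survive a round trip — an assumption based on the file name. Plain pickle illustrates the round-trip idea for a module-level function:

```python
import pickle

def double(x):
    return 2 * x

# Module-level functions pickle by reference; dill would additionally
# handle lambdas and closures like the enricher callables.
blob = pickle.dumps(double)
restored = pickle.loads(blob)
print(restored(21))
```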
# Export the pipeline to JupyterGIS for collaborative exploration
pipeline.to_jgis(
    filepath="remarquable_trees_paris_with_llm.JGIS",
    urban_layer_name="Remarquable Trees in Paris analysis",
)