-
Paper 3 - Session title: New Era for Open Science
10:15 Sentinel Hub - next generation platform for Sentinel-2 applications
Milčinski, Grega; Šernek, Bojan; Kadunc, Miha; Batič, Matej; Kolarič, Primož; Repše, Marko; Močnik, Rok Sinergise d.o.o., Slovenia
Sentinel-2 data have been distributed since November 2015; however, there are still few publicly available applications based on these data. The reason probably lies in the technical complexity of using S-2 data, especially if one wants to exploit the full potential of multi-temporal and multi-spectral imaging. The vast volume of data to download, store and process is technically challenging, even more so when using "current-generation" principles such as building pyramids of final products, which result in billions of small files that have to be managed efficiently.
At Sinergise, we have approached the problem from a different angle, thinking about which products and services we can offer without significant pre-processing, tiling or manual effort. We have set up a copy of the global S-2 archive at AWS [1] (where the data are accessible to any interested party free of charge) and implemented a processing chain that taps into the data in real time as requests come in. On top of this we have established a WMS/WMTS service [2], which receives a request with specific parameters (e.g. AOI, maximum cloud coverage, time range), queries the database for matching scenes, downloads the relevant data from AWS, creates a composite and a mosaic, and renders the final result according to the chosen rendering parameters (e.g. true color, false color, NDVI, EVI, LAI). All of these actions are performed on the full, worldwide S-2 archive (multi-temporal and multi-spectral) in just a few seconds. To demonstrate the technology, we created a simple web application called "Postcards from Space" [3], which makes it possible to query Sentinel-2 data anywhere in the world.
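To illustrate how such a service is typically consumed, the sketch below assembles a standard OGC WMS GetMap request in Python. The endpoint URL, instance-ID placeholder, layer name and the MAXCC cloud-cover parameter are illustrative assumptions, not a verbatim copy of the Sentinel Hub interface.

```python
# Minimal sketch: requesting a Sentinel-2 true-color rendering of an AOI
# from a WMS endpoint. The endpoint, instance ID, layer name and MAXCC
# parameter are assumed for illustration only.
import requests

WMS_ENDPOINT = "https://services.example.com/wms/<your-instance-id>"  # hypothetical

params = {
    "SERVICE": "WMS",
    "REQUEST": "GetMap",
    "VERSION": "1.1.1",
    "LAYERS": "TRUE_COLOR",            # rendering preset (assumed layer name)
    "SRS": "EPSG:4326",
    "BBOX": "14.4,45.9,14.7,46.1",     # AOI as lon/lat bounding box
    "TIME": "2016-01-01/2016-06-30",   # time range to select scenes from
    "MAXCC": "20",                     # max cloud coverage in percent (assumed)
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
with open("aoi_true_color.png", "wb") as f:
    f.write(response.content)
```

Because the request is plain WMS, the same parameters can be dropped into any OGC-aware GIS client instead of custom code.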
Once the main processes are established, it is relatively easy to build new applications on top of them. We will present the "Monitoring facility" [4], which makes it possible to compare changes in a specific area and create time-lapses. A notification service that lets users know when new data are available is provided as well. We plan to add further features before the conference, such as cloud removal, a change detection service and support for Landsat data. We will also establish an OGC-standard WCS service, which will make it even easier for other developers to integrate the data into their own tools.
Sentinel-2 data are only as useful as the applications built on top of them. We would like people not to spend their effort on basic processing and storage of data but rather to focus on value-added services. This is why we are looking for partners who would bring their remote sensing expertise and create interesting new services. This should become even easier once ESA's optical cloud platform, IPT Poland, is established; we will migrate our services there to give other app developers easier access to the data.
At EO Open Science we would like to present the platform and its capabilities and invite all participants to make use of it.
[1] http://sentinel-pds.s3-website.eu-central-1.amazonaws.com/
[2] http://www.sentinel-hub.com/apps/wms
[3] http://www.sentinel-hub.com/apps/postcards/
[4] http://www.sentinel-hub.com/apps/sentinel-monitoring
-
Paper 7 - Session title: New Era for Open Science
10:30 Using Computer Vision and Machine Learning Solutions for Automatic Image Classification of Crew Earth Observations (CEO) Imagery
Lee, Janice HX5-Jacobs JETS Contract, NASA Johnson Space Center, USA
The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides science and mission support for the Crew Earth Observations (CEO) Facility onboard the International Space Station (ISS). Earth imagery obtained through this program is captured by astronauts using digital handheld cameras five days a week, under different lighting conditions, viewing angles and at varying resolutions. Over the years, our database has grown to over 3 million images, including legacy imagery from the Mercury missions onwards. Our current approach to managing these data is to analyze each image individually to identify a geographic centerpoint coordinate and any visible features; as this is a manual process, it has only been applied to a fraction of the total dataset. We have recently implemented data science solutions to increase efficiencies in data management and to add value and accessibility to our imagery database. We are using computer vision algorithms (3D histogram matching, k-means clustering and image fingerprinting) to construct a classification taxonomy that better identifies the range of content in our database and to build a CBIR (Content-Based Image Retrieval) search tool that enables easier access to our imagery. We are also employing a machine learning algorithm (a random forest classifier) to automatically classify our imagery and potentially enable automated feature recognition.
The computer vision algorithms center on quantifying the content of each image via feature descriptors such as 3D color histograms or image hashes/fingerprints. From there, a cluster analysis function can be applied to partition the dataset, or a distance function can be applied to identify similar images within it. Both of these techniques can likewise be used to drive an image search engine and enhance public access.
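As a rough sketch of the descriptor-and-distance idea described above (not the ESRS team's actual pipeline), the following computes a 3D color histogram per image with OpenCV and ranks a small collection by histogram distance; the file names are hypothetical.

```python
# Sketch of content-based retrieval via 3D color histograms, assuming
# OpenCV and a handful of local image files; illustrative only.
import cv2
import numpy as np

def histogram_3d(path, bins=8):
    """Return a normalized 3D color histogram as a flat feature vector."""
    image = cv2.imread(path)
    hist = cv2.calcHist([image], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

paths = ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg"]  # hypothetical files
features = {p: histogram_3d(p) for p in paths}

# Rank images by chi-squared distance to a query image (smaller = more similar).
query = features["img_0001.jpg"].astype(np.float32)
ranked = sorted(paths,
                key=lambda p: cv2.compareHist(query,
                                              features[p].astype(np.float32),
                                              cv2.HISTCMP_CHISQR))
print(ranked)
```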
The machine learning classifier involves training an algorithm to ‘learn’ the different characteristics of the content of our imagery dataset and to use decision trees and random forest logic to determine the category to which each image belongs. A dataset of over 10,000 labelled images was used to train the classifier, from which the algorithm is able to identify the most probable classification to assign to each target image.
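A minimal sketch of such a classifier with scikit-learn, assuming each image has already been reduced to a fixed-length feature vector (e.g. the color histograms above); the random feature matrix, labels and split are stand-ins, not the team's actual training setup.

```python
# Minimal random forest classification sketch with scikit-learn,
# using random stand-in data in place of real image feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
X = rng.random((10_000, 512))            # stand-in for per-image feature vectors
y = rng.integers(0, 5, size=10_000)      # stand-in for category labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Evaluate with the same metrics reported in the abstract.
pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro"))
print("recall   :", recall_score(y_test, pred, average="macro"))
```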
The promising results we have achieved thus far (classification accuracy of 83%, precision of 0.62 and recall of 0.75) highlight the potential these tools offer in facilitating data extraction and data mining from a large and varied dataset, as well as in creating options for increased data accessibility. Many of the resources used in these projects are open source and freely available through web-based repositories such as GitHub. Optimizing our data processing has helped to add value to our dataset and has opened up avenues for collaboration and data visibility.
In addition, these efforts are in alignment with NASA’s Open Data policy, which has established objectives to increase access to the results of scientific research and to optimize archival and dissemination of data and science products. Our intent has been to implement effective and scalable methods to better access, manage and analyze our data and to contribute to the current Open Science movement.
-
Paper 56 - Session title: New Era for Open Science
10:00 Ramani Huria: Mapping for Flood Resilience
Iliffe, Mark (1); Anderson, Edward (2) 1: University of Nottingham/World Bank; 2: World Bank
This paper examines how maps can be generated of the most fragile, informal areas of our global society, through a case study of Ramani Huria, a project in Tanzania through which communities have mapped over 1.3 million people, by and for themselves.
Ramani Huria is a consortium of the Tanzanian Commission of Science and Technology, the University of Dar es Salaam, Ardhi University, the Buni Innovation Hub and the City Council of Dar es Salaam, supported by the World Bank and the Red Cross. Its aim is to identify flood-prone neighbourhoods of Dar es Salaam and to create community-level flood scenarios, leading to enhanced awareness of and resilience to flooding in the city.
So far Ramani Huria has mapped around 1.3 million people in Dar es Salaam, capturing everything from sanitary facilities to drainage, with over 300 km of drains and other flood-relevant features recorded. The project has become a platform for innovation in resilience, supporting themes ranging from transport accessibility to its core mission of community risk and resilience. All data collected through the project are openly available, in support and in the spirit of the Government of Tanzania's commitment to Open Data.
Ramani Huria is unique in that the map data are collected by community members and local university students; over 250 contributors have generated map data over the past 9 months. This is a novel engagement in Volunteered Geographic Information (VGI) in emerging nations, as the data are collected on the ground, as opposed to the remote mapping exercises common to other humanitarian mapping endeavours.
The majority of Ramani Huria's mapping is conducted in informally planned urban developments, more commonly known as ‘slums’. Mapping these areas produces a vibrant picture of the business and activity in these often misunderstood places, while illuminating community-driven objectives and challenges.
The project has also offered a platform for local innovation in Earth Observation, fusing satellite- and UAV-derived imagery. These data come from various satellites, from Landsat to WorldView-3, with plans to utilise available open data streams from Sentinel-2. These data streams subsequently enable the creation of flood inundation models, leading to improved community resilience to flooding and other hazards.
In presenting the methodology of Ramani Huria, this paper examines the contribution to the theory of VGI and cartography made by mapping the fastest-growing city in Africa. Ultimately, this sets an agenda for future research and exploration into how cartography and Earth Observation can be democratized by citizens, even those most vulnerable in our global society.
-
Paper 59 - Session title: New Era for Open Science
10:45 EOLib: An Image Information Mining Project
Datcu, Mihai; Espinoza-Molina, Daniela; Dumitru, Octavian; Schwarz, Gottfried; Reck, Christoph; Manilici, Vlad German Aerospace Center (DLR), Germany
The abundance of available satellite images calls for their automated analysis and interpretation, including the semantic annotation of discovered objects as well as the monitoring of changes within image time series. A common approach is to cut large satellite images into contiguous patches and to classify each patch separately by attaching a semantic content label to it. In this context, the selected patch size is a critical parameter, as patches that are too large may contain multiple objects, while patches that are too small may not be interpretable due to missing contextual information. This approach has been embedded into an interactive active-learning and exploitation environment within the ESA-funded EOLib project. The EOLib software provides automated image data ingestion, feature extraction, and semantic image content annotation, supported by interactive visualization tools.
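As a simple illustration of the patch-cutting step (the image and patch sizes are arbitrary choices, not EOLib's actual parameters), the sketch below tiles an image array into contiguous, non-overlapping patches with NumPy:

```python
# Sketch of cutting a satellite image into contiguous, non-overlapping
# patches for per-patch classification; patch size is an arbitrary choice.
import numpy as np

def tile_image(image: np.ndarray, patch_size: int) -> list[np.ndarray]:
    """Split an (H, W, bands) array into patch_size x patch_size tiles,
    discarding any incomplete border tiles."""
    h, w = image.shape[:2]
    patches = []
    for row in range(0, h - patch_size + 1, patch_size):
        for col in range(0, w - patch_size + 1, patch_size):
            patches.append(image[row:row + patch_size,
                                 col:col + patch_size])
    return patches

# Example: a fake 1024 x 1024 single-band scene cut into 64 x 64 patches.
scene = np.zeros((1024, 1024, 1), dtype=np.uint16)
patches = tile_image(scene, 64)
print(len(patches))  # 16 * 16 = 256 patches, each classified separately
```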
We report on our experiences with medium- and high-resolution Synthetic Aperture Radar (SAR) and optical multispectral image classification using such an active-learning approach. The most important phenomenon is the impact of image resolution: the higher the resolution, the larger the number of discernible land cover categories, in particular for built-up areas and industrial sites, where we can see and interpret the impact of distinct human-made activities. This becomes apparent when we compare the same target areas acquired by different space-borne SAR sensors (e.g., Sentinel-1A versus TerraSAR-X). In addition, it turns out that several country-specific regional surface cover categories can be trained and retrieved with SAR images that often appear different in optical satellite images; however, any increase in classification accuracy has to be paid for with higher computational effort.
Thus, EOLib represents an approach for future ground segments whose functionality will no longer be limited to the mere generation of Level 1, 2 or 3 products, but will include automated and user-friendly image content analysis and annotation.
-
Paper 69 - Session title: New Era for Open Science
09:30 KeyNotes A1
McGlade, Jacqueline UNEP, United Kingdom
-
Paper 70 - Session title: New Era for Open Science
09:45 KeyNotes A1(2)
McGlade, Jacqueline UNEP, United Kingdom
Keynote Speaker