
Streamify import and process osm data task #1214

Merged

Conversation

GabrielBruno24 (Collaborator):
Changes the import and process osm data task so that it reads the source files with an asynchronous stream. Also adds an option to one of the functions used by the task so that it will not halt if a single geojson is missing compared to the raw OSM data.

Adds new classes inheriting from DataGeojson and DataOsmRaw. Instead of reading the whole file at once, these classes stream it piece by piece asynchronously, allowing large files to be read without crashing the application.
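The streaming pattern described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `collectFeatures` is a hypothetical helper, and `Readable.from` stands in for `fs.createReadStream(filename).pipe(JSONStream.parse('features.*'))`, which emits one parsed feature per `'data'` event.

```typescript
import { Readable } from 'node:stream';

// Sketch only: consume a feature stream piece by piece instead of reading
// the whole file into one giant string. In the real task, the stream would
// come from fs.createReadStream(...).pipe(JSONStream.parse('features.*')).
function collectFeatures(featureStream: Readable): Promise<object[]> {
    return new Promise((resolve, reject) => {
        const features: object[] = [];
        featureStream.on('data', (feature: object) => features.push(feature));
        featureStream.on('end', () => resolve(features));
        featureStream.on('error', reject);
    });
}
```

Note that this variant still accumulates every feature in memory; only the *reading* is chunked, which is exactly the limitation discussed in the review below.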
@tahini (Collaborator) left a comment:

Let me know if I am mistaken in my review, but I'm not sure the intended goal of streaming the operation works correctly.

```typescript
console.log('Start streaming GeoJSON data.');
const readStream = fs.createReadStream(this._filename);
const jsonParser = JSONStream.parse('features.*');
const features: GeoJSON.Feature[] = [];
```
Collaborator:

Here, you seem to fill the features array with the file content. If the file content is too big for memory, so will be the features array, no? Ideally, each feature should be "processed" (whatever "processed" means to the consumer of this class) and then dropped after processing, to avoid filling memory.
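The process-and-drop approach suggested here could look something like this sketch (the function name and the `onFeature` callback are assumptions, not the project's API): each feature is handed to a callback as it arrives and then released, so memory use stays bounded regardless of file size.

```typescript
import { Readable, Writable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Sketch: process each feature as it arrives, then let it be garbage-collected
// instead of accumulating everything in an array.
async function processFeatureStream(
    features: Iterable<object> | AsyncIterable<object>,
    onFeature: (feature: object) => void
): Promise<number> {
    let count = 0;
    const sink = new Writable({
        objectMode: true,
        write(feature, _encoding, done) {
            onFeature(feature); // process here, keep nothing
            count += 1;
            done();
        }
    });
    await pipeline(Readable.from(features), sink);
    return count;
}
```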

Collaborator:

No, because the main original limitation was that the whole file was put into one big string, and that string had a maximum size.

Yes, this approach still takes a lot of memory, but Node seems able to cope since the data arrives in small chunks. We will need to refactor that further, but I don't think we have time for it at the moment.

Collaborator:

Ok, so it is less limited than it used to be, and an intermediary step for later when everything is in PostGIS? Fair enough. Then just fix the lowercase 'c' in the function name and it's good.

@greenscientist (Collaborator), Jan 26, 2025:

We'll wait for confirmation from @GabrielBruno24 that it actually works on a region as big as Montreal.
It would be interesting to see if there's a limit.
(And maybe some stats on memory usage while it runs.)

GabrielBruno24 (Collaborator, Author):

It takes forever because the runtime is O(N²), but it does work if you give Node enough memory, yes. It would work better if all the write actions were part of a stream, like I did with tasks 1 and 1b, but the logic here is a lot more complex, so it would be better to just rewrite the function from scratch.

Collaborator:

what is enough memory? (ballpark)

GabrielBruno24 (Collaborator, Author):

About 8 GB

@greenscientist (Collaborator) left a comment:
A few small things on my side.


Modifies the function getGeojsonsFromRawData() so that it accepts a new option parameter, continueOnMissingGeojson.
When generateNodesIfNotFound is false and continueOnMissingGeojson is true, the function will print a warning and move on to the next geojson when a feature that is in the raw OSM data is missing from the geojson data.
Previously, the only behavior was to throw an error and interrupt the process.
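The described option behavior can be illustrated with a simplified sketch. The function and types below are hypothetical stand-ins for the lookup inside getGeojsonsFromRawData(), not its actual signature:

```typescript
type GeojsonFeature = { id: number };

// Hypothetical stand-in: when continueOnMissingGeojson is true, a missing
// geojson only produces a warning and the loop moves on; otherwise (the
// previous behavior) it throws and interrupts the process.
function lookupGeojsons(
    rawOsmIds: number[],
    geojsonById: Map<number, GeojsonFeature>,
    options: { continueOnMissingGeojson?: boolean } = {}
): GeojsonFeature[] {
    const found: GeojsonFeature[] = [];
    for (const id of rawOsmIds) {
        const feature = geojsonById.get(id);
        if (feature === undefined) {
            if (options.continueOnMissingGeojson === true) {
                console.warn(`No geojson found for raw OSM feature ${id}, skipping.`);
                continue;
            }
            throw new Error(`No geojson found for raw OSM feature ${id}`);
        }
        found.push(feature);
    }
    return found;
}
```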
@GabrielBruno24 force-pushed the streamifyImportAndProcessOsmDataTask branch from e4072fd to 218486a on January 27, 2025 at 17:23.
@greenscientist (Collaborator):

looks good to me

```typescript
// The proper way to do this would be to do it in getData(), but making that method async would force us to modify every class this inherits from, so we use this factory workaround.
// TODO: Rewrite the class from scratch so that it accepts an async getData().
static async create(filename: string): Promise<DataStreamGeojson> {
    const instance = new DataStreamGeojson(filename);
```
Collaborator:

Ideally, with the factory approach, the data could have been read before calling the constructor, so the object would always be initialized. But since we're going to rewrite all of this from scratch soon, it's fine as is. This comment is more for another eventual factory you might have to write some day.
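The read-before-construct variant suggested in this comment, where an instance is never observable in an uninitialized state, could be sketched like this (class and method names here are illustrative, not the project's):

```typescript
class StreamedGeojsonSketch {
    // Private constructor: instances can only come from create(), fully initialized.
    private constructor(
        private readonly _filename: string,
        private readonly _features: object[]
    ) {}

    static async create(filename: string): Promise<StreamedGeojsonSketch> {
        // Read the data first, then construct: no half-initialized object exists.
        const features = await StreamedGeojsonSketch.readFeatures(filename);
        return new StreamedGeojsonSketch(filename, features);
    }

    // Stand-in for the real streaming read; an assumption for this sketch.
    private static async readFeatures(_filename: string): Promise<object[]> {
        return [{ type: 'Feature' }];
    }

    getData(): object[] {
        return this._features; // stays synchronous, as in the PR
    }
}
```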

@tahini tahini merged commit d673432 into chairemobilite:main Jan 27, 2025
5 checks passed