Query an ObjectID field #22
Comments
How about ...
Sorry ...
Nice. I didn't know (or had forgotten) that feature existed. Except I over-simplified my example in trying to simplify the request. My bad. What I really want to do is the equivalent of:
...
Ah, hrm ...
I was thinking maybe a custom JSON decoder that recognizes the string "ObjectId('...')" and actually deserializes that into a bson.ObjectId. That approach has collision problems (one can never use ObjectId('...') as an actual string).
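A minimal sketch of that decoder idea, assuming pymongo's bson package; the function names are made up, and the collision problem shows up at the bottom:

```python
import json
import re

from bson import ObjectId

# Any string of exactly this shape gets turned into a real ObjectId.
OBJECTID_RE = re.compile(r"^ObjectId\('([0-9a-fA-F]{24})'\)$")

def _convert(value):
    """Recursively rewrite "ObjectId('...')" strings into bson.ObjectId."""
    if isinstance(value, str):
        match = OBJECTID_RE.match(value)
        return ObjectId(match.group(1)) if match else value
    if isinstance(value, list):
        return [_convert(v) for v in value]
    if isinstance(value, dict):
        return {k: _convert(v) for k, v in value.items()}
    return value

def loads_with_objectids(text):
    """json.loads, then post-process the parsed structure."""
    return _convert(json.loads(text))

query = loads_with_objectids("""{"_id": {"$gte": "ObjectId('530f9d6b8cb0e36b2dfb0a10')"}}""")

# The collision: a document whose field legitimately holds the literal string
# "ObjectId('530f9d6b8cb0e36b2dfb0a10')" can no longer be matched as a string.
```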
Thanks for following up on this. I'm not sure I'm following, but I don't see how your suggestion addresses my need. I want a way to query a collection for all documents created after a given date (assuming default ObjectId assignment for _id). We have a collection with millions of documents, and 'skip' isn't performant, so to limit the number of documents returned, I'd like to specify a simple GTE query on the ObjectId to limit the scope of the query. If the _id were a simple integer, I could write this in the query field: {"_id": {"$gte": 1394805636}} But because _id is an ObjectId, there's no syntax in mongs/JSON to provide the lower bound.
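For concreteness, here is the lower-bound query being described, expressed with pymongo: ObjectId.from_datetime builds the smallest ObjectId for a given moment, so $gte on it limits the scan to documents created at or after that time. The client and collection names below are placeholders, not anything mongs defines.

```python
from datetime import datetime, timezone

from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient().somedb.somecollection  # placeholder names

# 1394805636 is the Unix timestamp used as the integer example above.
cutoff = datetime.fromtimestamp(1394805636, tz=timezone.utc)
lower_bound = ObjectId.from_datetime(cutoff)

cursor = coll.find({"_id": {"$gte": lower_bound}})
```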
I was thinking that if we added paging to the object view, and it paged through objects in the proper sort order, then by navigating directly to a single object, you would be able to click "Next" to advance through a listing of objects in ObjectId order. Setting aside performance concerns, would that address your use case? Of course, performance concerns are real. Also, this would only address simple use cases. Surely we can imagine a more complex query involving _id that this wouldn't enable. Another idea: can we infer the type of _id and cast the query value appropriately?
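One possible shape for the infer-and-cast idea, using the first document as a representative sample (the heuristic raised later in the thread); the helper is hypothetical, not part of mongs:

```python
from bson import ObjectId

def coerce_id_query(coll, query):
    """If the collection's _id looks like an ObjectId, cast string values in
    the query's _id clause accordingly (heuristic: first document only)."""
    sample = coll.find_one()
    if sample is None or not isinstance(sample.get("_id"), ObjectId):
        return query
    clause = query.get("_id")
    if isinstance(clause, str):
        query["_id"] = ObjectId(clause)  # raises bson.errors.InvalidId if malformed
    elif isinstance(clause, dict):
        # e.g. {"$gte": "530f9d6b8cb0e36b2dfb0a10"} -> {"$gte": ObjectId(...)}
        query["_id"] = {op: ObjectId(val) if isinstance(val, str) else val
                        for op, val in clause.items()}
    return query
```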
Oh, I see. So the paging would allow advancing to an object adjacent to the one currently referenced. I can see how that might fit some use cases, but it won't suit ours, because the primary goal is to limit the cursor scope to enable querying into an otherwise large set. I do see how, if one had an exact object ID and the object navigation buttons, one could jump to that known object and see adjacent documents, which would be useful, so it does provide a fix for some use cases.
Perhaps. Given the NoSQL aspect, it's impossible to guarantee the type of a given field for the whole collection, but one could use the heuristic that the first document is representative of the type. I'm a little averse to having mongs query the DB in order to infer how to query the DB. Here's another (slightly scary) idea: use jsonpickle to allow richer types. On second thought, that's a horrible idea, because jsonpickle isn't safe for user input. Here's how it could work, though: the query would look something like this:
...
Note that that's probably not proper jsonpickle syntax, but rather a pseudo-jsonpickle. While the syntax isn't very user-friendly, and it's mongs-specific, it does provide a mechanism to reliably solicit non-JSON types for queries. Mongs could do this safely by whitelisting allowed constructs.
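To make the whitelisting idea concrete, here is one way such a pseudo-jsonpickle decoder could work. The "py/type"/"args" tag format is invented for illustration; it is not real jsonpickle syntax and not something mongs implements.

```python
import json
from datetime import datetime

from bson import ObjectId

# Only constructors listed here may be invoked from user-supplied queries.
WHITELIST = {
    "bson.ObjectId": ObjectId,
    "datetime.datetime": datetime,
}

def _maybe_construct(obj):
    """object_hook: turn whitelisted {"py/type": ..., "args": [...]} dicts
    into real Python objects; anything else passes through unchanged."""
    if "py/type" in obj:
        ctor = WHITELIST.get(obj["py/type"])
        if ctor is None:
            raise ValueError("type not whitelisted: %r" % obj["py/type"])
        return ctor(*obj.get("args", []))
    return obj

def load_query(text):
    return json.loads(text, object_hook=_maybe_construct)

query = load_query(
    '{"_id": {"$gte": {"py/type": "bson.ObjectId",'
    ' "args": ["530f9d6b8cb0e36b2dfb0a10"]}}}'
)
```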
Maybe we should switch to XML? ;-) Remind me: is _id guaranteed to be an ObjectId?
I believe we're not guaranteed that _id is an ObjectId.
My inclination would be to implement a more general mechanism, even if that means using something as clumsy as XML. It'd be a shame to implement something for _id, and then the next user needs to query an ObjectId in his 'foo' field. That said, there's no sense over-engineering for use cases that don't exist. If inferring an ObjectId in the _id field is quick and easy, that would suit my use case.
In the MongoDB shell, one can do:
...
I don't believe it's possible to do this query in Mongs. It would be nice if there were a way to do so.
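For reference, the sort of query the report is describing, written with pymongo rather than the shell; the names and the id value are placeholders:

```python
from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient().somedb.somecollection  # placeholder names

# Equality match on an ObjectId-typed field; plain JSON typed into the mongs
# query box has no way to express the ObjectId(...) constructor.
doc = coll.find_one({"_id": ObjectId("530f9d6b8cb0e36b2dfb0a10")})
```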