Replies: 1 comment
-
@thobai pushing the aggregation to the service is complicated with the current architecture, since the SQLite virtual table interface treats tables as "dumb" sources of data, passing only the columns and predicates. One solution would be to create pseudo-columns, e.g.:

SELECT a_sum FROM my_service

and the adapter would translate the pseudo-column back into the corresponding aggregation when calling the service.

Filtering with custom functions is also hard, for the same reason. You could try something like the dbt MetricFlow adapter does and parse the SQL yourself, then extract the relevant filters and pass them to the service. You would also need to register the function (as a no-op) with SQLite, for the query to be valid.

I've been planning to write a sqlglot backend, which might give us more control over things like this, but we'd probably have to rethink the current interface for adapters.

If you want to write your own DB API 2.0 implementation, I have some code that I wrote last month that is a great starting point; all you need to do is implement the method that fetches the data from the service. You can find it here: https://github.com/betodealmeida/omnidb/tree/main/src/omnidb/dbapi
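To illustrate the no-op registration mentioned above: a minimal sketch using the standard library's sqlite3 module, where my_udf is registered as a function that always returns true, so the query parses and runs even though the real filtering would have already happened in the service. The table name and columns here mirror the example query and are otherwise assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_service (a INTEGER, b TEXT, c TEXT)")
conn.executemany(
    "INSERT INTO my_service VALUES (?, ?, ?)",
    [(1, "x", "y"), (2, "x", "y")],
)

# Register my_udf as a no-op that always passes; in a real adapter
# the actual filtering would be done service-side, and this stub
# only exists so SQLite accepts the query.
conn.create_function("my_udf", 2, lambda b, c: True)

(total,) = conn.execute(
    "SELECT sum(a) FROM my_service WHERE my_udf(b, c)"
).fetchone()
print(total)  # 3
```

Note that create_function takes the argument count (2 here), so the stub must match the arity used in the query.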
-
I have a RESTful service that can aggregate data in different ways (sum, average, median, group by, ...). How easy would it be to extend shillelagh to forward the aggregations to the service, instead of fetching the (filtered) raw data and aggregating in the DB? As I'm dealing with large (or even big) data, getting the raw data out is not an option. I would also need to be able to run non-standard functions for filtering, e.g.

SELECT sum(a) FROM my_service WHERE my_udf(b, c)

which are executed by the service.

As an alternative, I'm thinking about "simply" creating my own DB API 2.0 implementation for my service.
What do you think would be easier?
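For reference, the DB API 2.0 (PEP 249) alternative could be sketched as below. This is a bare-bones, read-only connection/cursor pair; fetch_from_service is a hypothetical stand-in for the REST call and is not part of shillelagh or omnidb.

```python
class Cursor:
    """Minimal read-only DB API 2.0 cursor backed by a fetch callable."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._rows = []
        self.description = None
        self.rowcount = -1

    def execute(self, operation, parameters=None):
        # A real implementation would translate `operation` into a
        # request to the service (including any aggregations/filters).
        columns, rows = self._fetch(operation, parameters)
        # DB API description is a 7-tuple per column; only the name
        # is populated here.
        self.description = [
            (name, None, None, None, None, None, None) for name in columns
        ]
        self._rows = list(rows)
        self.rowcount = len(self._rows)
        return self

    def fetchone(self):
        return self._rows.pop(0) if self._rows else None

    def fetchall(self):
        rows, self._rows = self._rows, []
        return rows

    def close(self):
        self._rows = []


class Connection:
    def __init__(self, fetch):
        self._fetch = fetch

    def cursor(self):
        return Cursor(self._fetch)

    def commit(self):
        pass  # read-only service, nothing to commit

    def close(self):
        pass


def connect(fetch):
    return Connection(fetch)


# Stub standing in for the RESTful service, which would return the
# already-aggregated result:
def fetch_from_service(operation, parameters):
    return ["a_sum"], [(42,)]


conn = connect(fetch_from_service)
cur = conn.cursor()
cur.execute("SELECT sum(a) FROM my_service")
rows = cur.fetchall()
print(rows)  # [(42,)]
```

The point of this route is that the SQL (or a structured request built from it) goes straight to the service, so the aggregation never has to be re-expressed through SQLite's virtual table interface.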