Build a cache of Postgres system objects #185
@greggailly Regarding our discussion under #251: `WHERE (nspname != 'pg_catalog' AND ...`. What needs to be done as a first step is to remove that first part of that WHERE clause. That will result in retrieving everything in the […]. Once this change is made, I would test it against an empty database to see what […]. Hope this helps.
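To illustrate the suggested first step, here is a minimal sketch of how the catalog-fetching query could make the `pg_catalog` exclusion optional instead of hard-coded. The function name and query shape are hypothetical, not Pyrseas' actual internals; only the `nspname != 'pg_catalog'` predicate comes from the discussion above.

```python
def build_type_query(include_system=False):
    """Build a query over pg_type, optionally including system objects.

    Hypothetical sketch: the real Pyrseas query selects more columns
    and has additional WHERE conditions (elided here).
    """
    query = (
        "SELECT nspname, typname, t.oid\n"
        "FROM pg_type t\n"
        "JOIN pg_namespace n ON n.oid = t.typnamespace"
    )
    if not include_system:
        # The historical behaviour: skip everything in pg_catalog
        query += "\nWHERE nspname != 'pg_catalog'"
    return query
```

Dropping the filter (`include_system=True`) is what would pull the system types into the fetch, which is why testing against an empty database is a sensible first check: everything retrieved there is, by definition, a system object.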
@jmafc do we have any idea how big of a lift doing this would be?
"this"? If by "this" you mean what Daniele suggested (which you have essentially done) and fixing loose ends such as the specific problem of "misplaced" functions, I suspect it may not be too difficult to come up with some fix or general workaround. But (there's always one of those), there could be something more ragged. However, the problem of #165 may be easier to solve now than when the issue was first opened, because in the interim we have done away with treating schema |
@jmafc I'm asking specifically about having a cache of all Postgres system objects. It sounded like there were a lot of gotchas there, and this issue has been open a long time. Sorry if I'm being dense and not understanding.
No, I'm the one who needs to apologize. I thought your comment was under issue #244, so I thought "this" meant fixing that issue. This, i.e., the cache, is probably like lifting 100 kg or more. Looking at one of my medium-sized databases where […].
Is this something you have a clear vision for, or is the above description about as far into planning/design as it's gotten?
I may have had a "vision" five years ago, but it seems rather blurry right now, not only due to the passage of time, but also due to other changes and the fact I haven't really been thinking a lot about Pyrseas in the meantime. As far as I can recall, the idea was to read in all the catalogs and build a cache, either in memory or in a YAML file, so that, for example, when we found a […].

At this point, in fact, if I were to start trying to implement something, I'd start by building a test case for #175 and use TDD to implement the minimal stuff necessary to get the test to pass. Then, based on what I may have learned from that exercise, I would try to generalize.
Issue #175, the `test_operclass.py` tests that were marked `xfail` in the last phase of dealing with #176, and issue #183 (probably also #184) all point to deficiencies in Pyrseas' knowledge of PG system types and functions, such as `integer` and `tsvector_concat`. These objects reside in the `pg_catalog` schema, and Pyrseas has always avoided fetching those objects.

I doubt that more than a few users CREATE OPERCLASSes on something like the `pg_catalog.integer` type as shown in those `xfail` tests. Maybe a sophisticated extension like PostGIS does, but they're an exception. OTOH, usage of ts_vector functions is more likely, as seen in #175 and in some `test_trigger.py` tests that use `tsvector_update_trigger`. Issues #183 and #184 request display of attribute information that may go beyond the PG system types, since there may be user-defined record types (table row-types).

The proposal is to retrieve very basic information, probably just oid and name, at initialization, so that it can be used in subsequent processing. The precedent for this is Post-Facto, but the difference is that whereas PF was connected live to both source and target databases, `yamltodb` only has access to the target oids. If we limit it to system types, it is highly unlikely that the oid for `tsvector_concat` in one database will differ from the other.