Multi databases (multi db) support #924
Comments
Following this! It's a showstopper for us, preventing updates beyond Django 3.0. We currently have an internal monkey patching workaround that works with Django <= 3.0 but I can't get it to work when they remove the multi_db parameter. |
Initial PR in #930. |
@bluetech I'm testing 4.3.0 specifically for the multi db support. It seems to work, kind of. It seems like this old issue describes what I'm seeing: #76 |
Awesome! Thanks for working on this, it looks great. I converted a multi-db project over to the experimental API and it seems to be working, including flushing data between test runs. (I was previously using a workaround for this.) It's also working correctly with pytest-xdist, which is very cool.
FWIW I definitely struggled with the lack of a fixture version. Just to spell out the issue you're talking about, with a single-db project I would do something like this:
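For example, a sketch (the `author` fixture and the model are stand-ins):

```python
import pytest

@pytest.fixture
def author(db):
    # depending on pytest-django's `db` fixture gives this fixture
    # (and any test using it) database access
    from django.contrib.auth.models import User
    return User.objects.create(username="author")
```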
With multi-db I would imagine wanting to do something like this:
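(A hypothetical sketch only; no such `multi_db` fixture exists, it just illustrates the shape a fixture-based API could take:)

```python
import pytest

# Hypothetical: a parametrizable variant of the `db` fixture that grants
# access to a set of aliases instead of just "default".
@pytest.fixture
def author(multi_db):  # imagine multi_db requested with ["default", "legacy"]
    from myapp.models import LegacyAuthor  # hypothetical model
    return LegacyAuthor.objects.using("legacy").create(name="author")
```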
(Not sure if that's possible to implement, but just as an example of how an API could work.) This would be convenient for fixtures in general, because otherwise it's easy to forget to remove the db dependency from a fixture once it's no longer needed.
The workaround I found to get my doctests working was to add an autouse fixture, so that every collected test, doctests included, gets database access.
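A sketch of that kind of workaround, assuming the goal is to unblock the database for everything (`django_db_setup` and `django_db_blocker` are pytest-django's own fixtures):

```python
import pytest

@pytest.fixture(autouse=True)
def _doctests_need_db(django_db_setup, django_db_blocker):
    # set up the test databases once, then allow queries for every test
    with django_db_blocker.unblock():
        yield
```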
That's a pretty handy thing to be able to opt into, so it might be nice to have a more official way to do it? But it still leaves the doctests less efficient than they could be otherwise, since they don't actually need access to all the databases, so I think a fixture version would still be useful. One other idea I had while doing the conversion: it would be cool if there was some flag I could use to be warned about unused databases, so if I removed the last use of a database from a test I'd be told it can be dropped from that test's list. Thanks again for pushing this forward! |
@jgb here's what I'm using successfully in case it helps. Packages:
Databases:
I started with adding just a simple test file before converting my actual tests:
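A minimal test along these lines, with "replica" standing in for whatever second alias is configured, might be:

```python
import pytest
from django.db import connections

@pytest.mark.django_db(databases=["default", "replica"])
def test_multi_db_smoke():
    # touch each alias to prove both test databases were created
    for alias in ("default", "replica"):
        with connections[alias].cursor() as cursor:
            cursor.execute("SELECT 1")
```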
Results:
I imagine if you can try an isolated test like that and narrow down the issue it might help bluetech when they get back to working on this. You might also look closely at your fixtures, maybe entirely disable them, and see if the isolated test starts working again -- it wouldn't be that surprising if an old multidb workaround in your fixtures is messing with this. |
Does this mean that if I have a multi-db project, I cannot use pytest for tests? I have a legacy DB, and I created an app with some models that correlate with tables in that legacy DB. I have also created some endpoints with Django DRF (managed=False), so no migrations are done. |
Yes, I'm pretty sure we need some integration with fixtures here. Your suggestion should be doable; all we really need is to know the list of databases once the test is about to be executed. I'll definitely mull it over. Of course, if someone wants to submit a PR with a proposal, that would be possible as well.
Right, currently there is no way to add marks directly to doctests. This is pytest-dev/pytest#5794. I can't think of any clear way to support it either.
For the multi-db support, pytest-django depends almost entirely on the django.test code, so such a feature would probably have to go through Django. It might be possible to somehow track whether a connection for a database was used during a test, but I'm not sure. There are also bound to be a lot of false positives (or rather, cases where you want to keep a DB anyway), so it would definitely need to be off by default. |
It's supposed to be the other way around - previously you couldn't, now you can. If you configured the legacy database in your DATABASES then it should work. If you tried it and it didn't work, we'd need to know how it failed. |
Hi @bluetech. In testapp/settings.py I have 2 databases. One is a legacy db, the one that will be queried by the unmanaged models in the API app.
I also have a testapp/settings_test.py. In this file I disable migrations, define sqlite databases, and set managed=True for the models.
If I run normal unit tests with the "production" settings, it fails as expected, because the relations for the unmanaged models do not exist. If I run using the test settings, it works as expected. But if I run using pytest, it fails.
Here is a gist with more code (models, factory, test, settings): https://gist.github.com/gonzaloamadio/14f935d96809299b7f1e9fb88a6e8e94 I put a breakpoint and inspected the DB. As expected, when I run with the unittest suite, the ingredient table is there. But when running with pytest, there is no ingredient table. |
One more comment: I ran unittest with the verbose option. This is the output:
So I found this solution: https://stackoverflow.com/questions/30973481/django-test-tables-are-not-being-created/50849037#50849037 Basically, do in conftest.py what UnManagedModelTestRunner does (a sketch follows below). This solution worked for me, @bluetech.
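A sketch of that conftest approach (assuming the models are importable by the time the hook runs; adjust the filtering to taste):

```python
# conftest.py
from django.apps import apps

def pytest_configure(config):
    # Mirror what UnManagedModelTestRunner does: flip unmanaged models to
    # managed=True so their tables get created in the test database.
    for model in apps.get_models():
        if not model._meta.managed:
            model._meta.managed = True
```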
|
@gonzaloamadio Right, pytest doesn't consider Django's TEST_RUNNER setting, so whatever a custom test runner does has to be replicated in a conftest.py, as you did. |
So I just stumbled upon this section in the docs. We are currently upgrading a multi-DB Django project from Django 1.11 to Django 3.2 and also upgrading the pytest and pytest-django packages. I was not aware of all these changes, but for us it worked out of the box without any issues. The tests are passing without problems. So thank you for that! |
Multi DB is supported then? I am facing a strange issue when trying to execute queries using cursors on multiple DBs. Am I misusing the fixtures? |
Same problem here! Django DB fixtures don't work with the cursors generated by:
??? |
We ran into a problem when we added a second database to the Django settings. When running pytest, it tried to create a test database with a conflicting name for the second connection. We were able to fix it by specifying an explicit name for the test database in the settings (in the TEST option of each database entry). |
What about accessing a non-default DB within a fixture, how can we do that? I know about the django_db marker.
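One pattern that works: mark the tests and route inside the fixture with .using() (a sketch; the "legacy" alias and model are examples):

```python
import pytest

@pytest.fixture
def legacy_obj():
    from myapp.models import LegacyThing  # hypothetical model
    return LegacyThing.objects.using("legacy").create(name="example")

@pytest.mark.django_db(databases=["default", "legacy"])
def test_reads_legacy(legacy_obj):
    assert legacy_obj.pk is not None
```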
|
What I did so far: I configured both databases in my settings.
However, it's creating both test databases, but all of my fixtures are executing against only one of them. Here is my fixture:
Do I need to make any other changes? |
Solid multi-db support would be a huge win for us. We have a multi-tenant Django app touching a bunch of databases, and ancillary databases as well. |
Hey guys, thanks to all for your work on this project! I found my way to this thread while upgrading some packages and running the test suite:
For now I pinned pytest-django==4.2.0. Thanks again and have a great time! |
My case is even more complicated: I have two databases, one in MySQL and one in PostgreSQL, and I'm not sure how to test them with pytest. |
Is this still the recommended way to enable DB access for all tests?

```python
@pytest.fixture(autouse=True)
def enable_db_access_for_all_tests(db):
    pass
```
|
I'm unsure how useful this will be for others, but if you're transitioning from a codebase where all of your tests inherit from a standard test class where you set `databases = "__all__"`, or if you use that pattern often, you should know:
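(For reference, the pytest-django counterpart of that pattern is the marker's `databases` argument, which also accepts `"__all__"`:)

```python
import pytest

@pytest.mark.django_db(databases="__all__")
def test_touching_every_database():
    ...
```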
|
Is the current support meant to include support for the MIRROR setting? Everything except that was working for me, and I also couldn't write the test data directly to the second database without permission errors, e.g. |
Is there any way I can apply @pytest.mark.django_db(databases=['default']) to all tests? I have hundreds of tests that use the db via the "magic" fixture:

```python
def test_profile(db):
    ...
```

Edit: this happens even after removing the 2nd DB from my settings.
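One way to avoid touching every test is a conftest hook (a sketch; add_marker is standard pytest API):

```python
# conftest.py
import pytest

def pytest_collection_modifyitems(items):
    # apply the restricted django_db marker to every collected test
    for item in items:
        item.add_marker(pytest.mark.django_db(databases=["default"]))
```
|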
I have the same issue. |
Is it possible to know which db a test that specifies multiple db's is intended to be running against? i.e. I've got two db's and the test runs twice, does the test know which db this run is for? |
Nice, guys! Everything works. Is there some way to avoid the database flushing? |
Everything seems to be working for me! Great feature, IMO. |
Hi there all! Please, could anybody explain how to run a test for a specific database in a multi-database configuration? I have defined three databases in settings.py: 'default', 'postgres1' and 'postgres2'. The first one is the default Django sqlite3 DB. The second one has read-only access and the last one has read-write access. I need to check that my model is created in the 'postgres2' database with read-write access. So, I wrote the test:
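```python
import pytest
from my_app.models import MyModel

@pytest.mark.django_db(databases=['postgres2'])
def test_create_my_model():
    MyModel.objects.create()
```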
If I run the test, I get an error saying that there is no access to 'postgres1' database with read-only access (yep, there is no connection to postgres1, but I expect it will not be used). Thanks in advance! |
Does this work?
https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.TransactionTestCase.databases
https://stackoverflow.com/questions/38307523/test-isolation-broken-with-multiple-databases-in-django-how-to-fix-it
Set the databases variable inside the test case. |
@kkrasovskii I suspect that you need to tell the ORM which DB to use for this model. Either via a DB router (https://docs.djangoproject.com/en/dev/topics/db/multi-db/#automatic-database-routing) or manually (https://docs.djangoproject.com/en/dev/topics/db/multi-db/#manually-selecting-a-database). Not sure why it is trying 'postgres1', though.
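For reference, a minimal router sketch for that kind of setup (aliases and app label assumed):

```python
class MyAppRouter:
    """Route my_app models to 'postgres2'; leave everything else alone."""

    def db_for_read(self, model, **hints):
        return "postgres2" if model._meta.app_label == "my_app" else None

    def db_for_write(self, model, **hints):
        return "postgres2" if model._meta.app_label == "my_app" else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label == "my_app":
            return db == "postgres2"
        return None
```
|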
@gonzaloamadio, @mschoettle, thank you so much for the reply! The problem doesn't seem to be routing. My apps work fine with the databases; the problem is with tests. Django version 5.0.2. As I've mentioned earlier, three databases are described in settings.py: sqlite3 as 'default' and two postgres databases ('postgres1', 'postgres2'). When I run the test, there is no connection to 'postgres1'. I rewrote the test the way @gonzaloamadio suggested:

```python
from django.test import TestCase
from my_app.models import MyModel

class MyTestCase(TestCase):
    databases = ['default', 'postgres2']

    def setUp(self):
        MyModel.objects.create()

    def test_something(self):
        pass
```

DB routing is done so that MyModel is written to 'postgres2'. If I run the test, it still fails with the missing 'postgres1' connection.
If I run the same command with 'postgres1' config removed from settings.py, the test is passing. |
I am not entirely sure if this will fix it, but we had some issues with the test databases and had to add the following to each database connection setting to force using the correct database in tests:

```python
'TEST': {
    'NAME': f'test_{env("DATABASE_NAME")}',
},
```
|
One more solution in #1113 |
Hi, I ended up here when searching for a solution to use pytest with a memory-based sqlite db backend, for tests in an application where the production db is postgres, since the application is largely db-engine agnostic. The rationale: in-memory sqlite is blazing fast. The only hiccup is that a couple of my tests do require the postgres db due to limitations in the sqlite engine. I naively thought this might be sufficient:

```python
# in tests/settings.py (which imports the application settings and overrides test specific config)
...
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:?cache=shared",
    },
    "postgres": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "...",
        ...
    },
}
...
```

```python
# in tests that invoke queries not supported by sqlite
@pytest.mark.django_db(databases=["postgres"])
def test_complex_querying(db):
    ...
```

Unfortunately this results in:

```
django.test.testcases.DatabaseOperationForbidden: Database queries to 'default' are not allowed in this test.
Add 'default' to pytest_django.fixtures._django_db_helper.<locals>.PytestDjangoTestCase.databases
to ensure proper test isolation and silence this failure.
```

My assumption here is that this is potentially due to the fact that the model being queried (which actually is constructed from a FactoryBoy factory) is somehow registered with the default connection. Now, before I go down the rabbit hole of attempting to address that: is there something else I should be aware of w.r.t. pytest-django's db setup? For instance, IIUC, the db setup is session-scoped, which means that if I create a … Is this correct? Is there a simpler way that I am not aware of?

TBH, the payoff for this "sqlite db with pg when required" is significant (integration test time down from 14m to 3m, bar the 2 failing tests that fail due to the limitation above). So, I would really like to pursue this.

Edit: I just thought of an alternate approach of using pytest … Again, is there a simpler way to express this? |
@lachlancannon any chance you've found a solution to this? I'm using two |
Wondering what the status of this feature is. I have two databases, a 'default' and a read-only DB, and would like 'default' to work as normal in a TestCase, but have the read-only database connect to an actual, existing db in PostgreSQL, not a dynamically-created db for the test. (Background: I use the standard approach in Django of using a custom model class for models managed by the read-only DB, e.g.
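```python
# (a sketch of that kind of base class; names are stand-ins)
from django.db import models

class ReadOnlyModel(models.Model):
    class Meta:
        abstract = True
        managed = False  # tables live in the external read-only database
```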
...and then use a custom database router to know whether a model uses the 'default' or the read-only DB.) However, I'm unclear how to configure pytest and pytest-django to set up my TestCase so that it creates a test db normally for 'default' but links directly to the existing read-only database (and doesn't add "test_" to the name or try to build the db dynamically). I feel the answer might be spread out across multiple responses above, but I'm not sure what it is or whether the solutions above are still valid. Thanks for any help! (I'll remove this comment and put it somewhere else if there's a support channel I'm missing. In that case, apologies for the misplaced question.) |
AFAIK, it's still working fine.
I've always been told that's a big 'no-no', as it defeats a lot of the ideas behind testing - that it should be very isolated, reproducible, and rely as little on any outside things as possible (ideally no outside resources at all, which is why mocks exist). Warning: lengthy reply. Skip to the end for a TL;DR.

It's not always that simple though, I get that. Where I work, we have a use case where a few db's are not owned by us - and we have unmanaged models mapping the data out - not unlike your case. We end up having pytest spin up a 'fake test db' (similar to the db we don't own), then we have a script that creates the dummy data in that database for the tests we're running (leveraging factories / factory-boy).

There is also the undue burden on the systems. Hitting a live DB with requests might have a big impact on performance for other users currently using that db. It might cause other unforeseen issues. What if your test was written in such a way that it somehow manipulated the data from the db and saved it to the live db? No, thank you. The speed of sending those requests and the slow(er) replies when testing against a live db also are, in my opinion, reason enough to say 'no way'. These are just some of the many reasons people recommend not testing against a live db. And there are lots more.

There are lots of solutions - you could mock the db response for the unmanaged models (giving you control, reproducibility, etc.), you could spin up randomized data using factory_boy like we do (again, bringing you all the benefits mentioned previously), you could use fixtures to fill the test_db with appropriate data similar to (if not exactly the same as) your real-life db. IMO fixtures are the most 'clunky' way to do it and get old fast - but they literally give you an exact replica of your 'real-life db' (complete with the existing data in it) for your local testing.

There are so many ways to spin up the exact same data that is currently in your live db in the test environment using the test db's that it really should never be a consideration to test against the live db. Maybe someone will come along with some examples of testing against live dbs and a perfect reason as to why they do it - but I personally have yet to find a single one, honestly. I'm happy to entertain the idea though.

TL;DR: Don't test against live db's. Spin up your data in a local test_db and let pytest handle the creation/tear-down. Use scripts/libraries/fixtures to replicate your data in the local test db. |
@hannylicious Thanks for your thoughtful response. Sorry, I should have clarified that the second db is read-only but doesn't need to be the live version. It's just a very static, kind of complex data store that is used as background data in a lot of operations spread widely across the main app. Also, these are more the integration tests than the unit tests... in the unit tests I think I've done a reasonable job of using patches and avoiding dependencies like this.

In the past I've tried to make a lot of MagicMock objects and patching them here... and there... and dang, over there too... but I found myself spending way too much time trying to figure out where to inject them in different places of the app, and the tests were getting brittle. I also spent a lot of time in tests trying to reconstruct just a part of this very stationary, very non-changing read-only db. So I wanted to just create one real database that could be used in tests and then just set the tests up to connect to it. This db could act like fixtures that don't need to be loaded (and are already relational!).

And yes, a step away from that is your comment about writing scripts / fixtures to populate a portion or a complete representation of it. But if all of your read-only models are set up as unmanaged and you're using a router to decide between databases for each model, it becomes harder to populate the read-only test db in your tests. Now you need some kind of custom test runner to change things... maybe look for and iterate through this subset of models and change them back to managed, and maybe do some other things to get the test database migrated and ready. Also, your read-only Django models might only model a portion of the target db... yet there might be foundational data not reflected in those (facade-like) Django model classes that's needed in the target db for it to be relationally sound. Setting up the tests to do this makes the tests more brittle and hard to understand when you come back to them, when you really only need the unchanging, read-only db attached and ready for the integration test to run.

But it may be I'll need to do the above as it's the proper way. If so, I haven't found a reference implementation for the current best practice on how to do this. Is there a gist or other resource you'd recommend that shows the latest way to handle this? It seems like decorators have changed and some of the earlier examples I've found aren't current. I can't seem to find one place that shows how to do this... and my limited understanding prevents me from putting them together in the right way. Thanks again for any thoughts. |
I agree with what @hannylicious said, but if you still want to do it, I think you can use the … |
That's not necessarily true. You could do something like this:
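```python
# (a sketch of the env-var trick described below; the original snippet was
# not preserved here, and "SOME_ENV_VAR" is the commenter's placeholder name)
import os
from django.db import models

class ExternalModel(models.Model):
    class Meta:
        db_table = "external_table"  # hypothetical table name
        # managed only when the environment variable is set (e.g. in tests)
        managed = bool(os.environ.get("SOME_ENV_VAR"))
```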
Given something like that - you can use 'SOME_ENV_VAR' to control whether or not it's a 'managed' model, without the need for excess test runners and things like that. You could add that into your core settings, your specific test settings, or even add it to the command at run time - however you want! Sure, it's a little wonky, but it works amazingly well. Even given your use case, I'm not seeing any particular reason or benefit as to why you should use a real persistent db for the read-only portion of things. IMO, if I were in your position, I would be spinning up whatever test db as read-only (or not, doesn't matter, you just need some data for the test really...) and just using factory-boy or some scripts to create the data in there I need. Even if the data in the database is 'a lot', create a factory for your unmanaged models that contains all expected attributes and creation pieces and let it manage the creation of the data. You can get as complex or simple as you want! You can use things like traits and sub-factories. Let's assume for the sake of discussion I needed some read-only test data about 2 kids in a school system. I need different kids, different schools, different school years, but the same grades recorded in 2 classes - with both kids sharing the same last name of "BRICK" to recreate a bug that someone brought to our attention that only happens in that very specific occurrence. That might sound tricky and/or painful to set up - but using factory_boy?
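```python
# (a sketch of the kind of one-liner described below; the factory and its
# parameters are hypothetical)
from myapp.tests.factories import ReportCardFactory

report_cards = ReportCardFactory.create_batch(
    2,
    student__last_name="BRICK",  # both students share this last name
    science_grade="A",
    math_grade="B",
)
```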
Depending on how you setup your factories, it would/could create 2 report cards, tied to 2 different students with the last name of "BRICK", at 2 different schools, during different school years, etc. each with an "A" in science and a "B" in math. When you come back to it 6 months from now? Easily readable, easily understood. It would also create any myriad of other attributes/relationships you setup in a fashion to where the data is as randomized as you want or as specific as you want. All from that one line (if you setup the factories to do so). The mix of related models being created could be managed or unmanaged, factory_boy won't care - it's just putting the data into the db, in the related tables where they should be. And yes, it's wildly fast - using an in memory db it takes fractions of a second for this data to be setup. Assuming your test is short and sweet? The entire setup, test and teardown would be in most cases way, way faster. I wrote this article about it: https://hannylicious.com/blog/testing-django/ ; Yes, the article is out of date. Yes, the article has issues that need correcting and could be more clear. However, the over-arching ideas of quickly/easily creating test data for unmanaged models is still relevant. It's also how I personally prefer to solve the issue. In my experience, letting your test suite handle the db side of things is much faster, more maintainable, more manageable, requires less resources, has less barriers and gives you access to all the benefits that test libraries offer. I'm all for learning better ways to do things - so if there are times where a live read-only db would be appropriate, I'd love to hear about it! But from what you're describing? Again, in my opinion, I don't see any reason you can't just spin that data up at the start of your test using whatever test db you see fit or whatever db your test suite is designed to spin up and would still heavily recommend using the test-db your suite gives you and letting it do the maintenance of that. |
Just dropping in a quick comment here to mention that I've opened a ticket mentioning a specific issue I've had with the multi-db support. |
I looked at the tests in the project. Looks like I was missing …
Note that … cc @lachlancannon @smohsenmohseni @SHxKM - not sure if this is what you wanted to know. |
Following the advice above to work on creating fixtures for both DBs (thanks). My project uses a custom router to map models to the correct db; for the sake of argument their names are "default" and "secondarydb".
When I run pytest, it seems like my custom database router is being called, but only some of its methods; and since the method that governs table creation is not being used, Django is trying to create all model tables in the first db, even if the model is meant to be created in the second db. Is there a special configuration step in pytest or pytest-django to make sure the database router's allow_migrate method is respected?
Background: I have set up models related to the second database so that they inherit from an unmanaged base class, made manageable during tests via conftest.py, and my router makes these read-only. A sketch of the overall pattern is below.
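A sketch of that overall pattern (names hypothetical, not the commenter's exact code):

```python
from django.db import models

class SecondaryDBModel(models.Model):
    """Abstract base for models whose tables live in 'secondarydb'."""
    class Meta:
        abstract = True
        managed = False


class SecondaryRouter:
    def _is_secondary(self, model):
        return issubclass(model, SecondaryDBModel)

    def db_for_read(self, model, **hints):
        return "secondarydb" if self._is_secondary(model) else None

    def db_for_write(self, model, **hints):
        # the secondary DB is treated as read-only; enforcement lives elsewhere
        return "secondarydb" if self._is_secondary(model) else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # without this, Django may try to create every table in "default"
        model = hints.get("model")
        if model is not None and self._is_secondary(model):
            return db == "secondarydb"
        return db == "default"
```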
django==5.0.6 |
I was faced with a similar problem: having multiple databases, with one of them being read-only. My specific constraints are:
As a bit of background, I was, as far as I understand, in a similar situation to @danielmcquillen where there were many (many) mocks all over the place, and it was getting unwieldy, to put it mildly. So, to avoid mocking the entire application for each test case, I created a pluggable monkeypatch that leaves the database alone. First, a global session-scoped autouse fixture:

```python
from collections.abc import Callable
from typing import Any

import pytest
from django.db.backends.base.creation import BaseDatabaseCreation


@pytest.fixture(scope="session", autouse=True)
def _fixture_leave_database_alone() -> None:
    """Allows running tests against multiple databases, where some databases are static and read-only.

    Depends on `settings.DATABASES[db]["TEST"]["FOOBAR_TESTING_LEAVE_DATABASE_ALONE"]` being set,
    which is also how this fixture is used.

    The fixture is always present (autouse=True).
    """

    def alone_leaver_decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        def alone_leaver(self: BaseDatabaseCreation, *args: Any, **kwargs: Any) -> Any:
            if self.connection.settings_dict.get("TEST", {}).get("FOOBAR_TESTING_LEAVE_DATABASE_ALONE", False):
                return self._get_test_db_name()  # type: ignore[attr-defined]
            return func(self, *args, **kwargs)

        return alone_leaver

    mp = pytest.MonkeyPatch()
    mp.setattr(BaseDatabaseCreation, "create_test_db", alone_leaver_decorator(BaseDatabaseCreation.create_test_db))
    mp.setattr(BaseDatabaseCreation, "destroy_test_db", alone_leaver_decorator(BaseDatabaseCreation.destroy_test_db))
```

And then modify the test subset of the DATABASES setting:

```python
DATABASES = {
    "my-read-only-database": {
        "TEST": {
            "MIGRATE": False,
            "FOOBAR_TESTING_LEAVE_DATABASE_ALONE": True,  # new custom setting
        },
    },
}
```

Everything else is left as-is. This works by intercepting database creation and, potentially more importantly, destruction, with a custom decorator which prevents the implementation from executing if you say so in the configuration. The fixture will "just work" because it's autouse and session-scoped. I believe the solution will work for all database providers, but if any override the methods themselves, the monkeypatched implementation obviously wouldn't be called, so they wouldn't take effect.

For anyone eagle-eyed enough to notice that I didn't use pytest's monkeypatch fixture but instead created it myself: notice the scope of the fixture. Hope this helps anyone else reduce the number of mocks too. Footnote with versions:
|
My 2 cents on this. I have 3 DBs in my project, one of which is not managed by Django. What I do for now is: …
then adding a … IIRC, because of this fixture (pytest-django/pytest_django/fixtures.py, line 163 at 1ffc323), it's not possible to patch … |
This issue replaces some historical issues: #76, #342, #423, #461, #828, #838, #839 (probably a partial list).
Background
Django supports multiple databases. This means defining multiple entries in the `DATABASES` setting, which then allows directing certain queries to certain databases.

One case is when an extra database is entirely independent: it has its own migrations, setup, etc. A second case is when an extra database is read-only: only used for read queries, not managed by Django. A third case is a read-only replica; for this, Django provides the `MIRROR` setting.

Django also allows configuring the order in which test databases are set up.
Django's multi-db testing support
pytest-django mostly relies on Django's underlying `TransactionTestCase` and `TestCase` classes for dealing with DB setup and such. Each pytest-django test gets run in a dynamically-generated `TestCase`/`TransactionTestCase`.

The main setting for multi-db support is `TransactionTestCase.databases`. This tells Django which databases to consider for the test case. By default it's only `default`. It's possible to specify `__all__` to include all databases.

Historical note: the `TransactionTestCase.databases` attribute was added in Django 2.2. Before that, a `multi_db` attribute was used. pytest-django only supports Django >= 2.2, so we happily don't need to concern ourselves with that.
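For reference, a sketch of the attribute on the Django side (the "replica" alias is just an example):

```python
from django.test import TestCase

class ReplicaTests(TestCase):
    # Databases this test case may touch; defaults to {"default"}.
    # "__all__" enables every configured alias.
    databases = {"default", "replica"}

    def test_something(self):
        ...
```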
Previous attempts

#397 - Adds a `multi_db=True` argument to `pytest.mark.django_db()`, and adds a `django_multi_db` fixture. Problem: uses the old `multi_db` attribute instead of the `databases` attribute.

#416 - Very similar to #397.

#431 - Adds a `django_db_testcase` fixture which allows the user to completely customize the test case class, including setting `databases`. Rejected for being too flexible; I'd prefer direct support for multi-db.

#896 - Adds a global per-database setting for whether to add to the `databases` value or not. Rejected because I think it should be possible to customize per-test.

Proposed solution

IMO we want something like #397/#416, but modernized to use `databases` instead of `multi_db`. The fixture part would be a bit problematic because it's no longer just a boolean (fixture enabled/not enabled), but a list of database aliases. So some solution would be needed for that, or maybe only the mark would be supported.

I'll try to work on it myself, but if for some reason I don't, PRs are definitely welcome!
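For illustration, a sketch of the mark-based API this proposal describes, matching the experimental support mentioned earlier in the thread (the "replica" alias is an assumption):

```python
import pytest
from django.db import connections

# The mark opts this test into both aliases; pytest-django applies them to
# the dynamically-generated test case's `databases` attribute.
@pytest.mark.django_db(databases=["default", "replica"])
def test_reads_from_replica():
    with connections["replica"].cursor() as cursor:
        cursor.execute("SELECT 1")
```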