Need someone to take over Subsonic #285
The project has been stable and happy for a while now, I think, apart from the issues with the various documentation sites. People have gone off on their various forks. I've got commit access to both v2 and v3, and am happy to do housekeeping, but I don't have the time to go on an issue-fix spree on v3 (the issues listed there now are mostly hard-to-fix niche bugs anyway). I have been thinking about stripping some elements back to the metal to make them more shareable and reusable, and starting my own reincarnation of the project as an SQL manipulation and generation tool. There's a lot of code-hours in there worth saving, particularly in the cross-database compatibility area.
SubSonic-Core is a spiritual successor, considering 50% of its code base is from this project. I am actively working on it, but not much can be done quickly when it's just one developer.
I jumped ship to PetaPoco a long time ago. They use the SubSonic class templates.
What I have been working on can insert 10,000 records in less than a second while returning DB-generated data like keys and computed columns. The same operation done the more traditional way, with each field having its own parameter and inserting one row at a time, takes about 28 seconds. I am all for SubSonic 3 being archived. It was a really good place to start from, even though I removed 50% of the code and rebuilt the expression writer from the ground up.
That's impressive. Do you just manually construct a single large insert statement, rather than individual ones, or do you use some kind of bulk insert functionality? I wasn't aware that the parameterised approach was much slower than the constructed-SQL approach, at least for individual inserts, but if you can do many rows in one statement that would save a lot of overhead.
I make use of user-defined table types (UDTTs) on the DB side. The insert statement uses one parameter, and it is a DataTable. (If only SQL Server treated table definitions like user-defined table types — then you would not have to define the table a second time as a UDTT.) The records are moved to the database by reference, and as long as I can ensure a synchronous operation, I can then read them back out with keys and computed column data. My testing has proven this is the case with SQL Server, and special query hints are required to trigger parallelization of an ETL insert. It's bulk-insert performance, without the bulk part. I generate an insert statement that does this.
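The approach described above can be sketched roughly as follows. This is an illustrative reconstruction, not SubSonic-Core's actual code: the table, type, and column names (`dbo.Customer`, `dbo.CustomerTableType`, `Id`, `CreatedAt`) are hypothetical, and a real run needs a SQL Server instance plus a connection string. The key moving parts are a single `SqlDbType.Structured` parameter carrying a whole `DataTable`, and an `OUTPUT` clause that streams generated keys and computed columns back in the same round trip.

```csharp
// Sketch only. Assumes a UDTT mirroring the target table already exists:
//   CREATE TYPE dbo.CustomerTableType AS TABLE
//       (Name nvarchar(100) NOT NULL, Balance money NOT NULL);
// and a target table dbo.Customer with an identity Id and a computed
// (or defaulted) CreatedAt column.
using System.Data;
using Microsoft.Data.SqlClient;

var rows = new DataTable();
rows.Columns.Add("Name", typeof(string));
rows.Columns.Add("Balance", typeof(decimal));
for (int i = 0; i < 10_000; i++)
    rows.Rows.Add($"Customer {i}", 0m);

const string sql = @"
    INSERT INTO dbo.Customer (Name, Balance)
    OUTPUT inserted.Id, inserted.CreatedAt   -- keys + computed columns back
    SELECT Name, Balance FROM @rows;";

using var conn = new SqlConnection(connectionString);
using var cmd = new SqlCommand(sql, conn);
var p = cmd.Parameters.AddWithValue("@rows", rows);
p.SqlDbType = SqlDbType.Structured;   // one parameter for all 10,000 rows
p.TypeName = "dbo.CustomerTableType"; // the UDTT defined above

conn.Open();
using var reader = cmd.ExecuteReader(); // read back DB-generated values
while (reader.Read())
{
    // reader.GetInt32(0)    => generated Id
    // reader.GetDateTime(1) => CreatedAt
}
```

Because the whole batch travels as one parameter, the server parses one statement and the per-row parameter-binding overhead of the traditional approach disappears.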
Interesting! So, is this at all portable to other RDBMSs?
Yes, it can be implemented in other RDBMSs. You just need support for UDTT mapping, then use that mapping to initialize a DB parameter of a structured type and pass a DataTable as the value.
True, but I don't think anything else supports UDTT mapping. This looks like an MS-specific integration. As far as I can tell, MySQL, SQLite, Oracle and Postgres don't support UDTTs.
Oracle supports UDTs, but they have gone much further than SQL Server and can actually map a UDT to a custom .NET type, which can be bound to an OracleParameter for in/out use. Oracle supports user-defined types (UDTs) and .NET custom types. I am not sure about MySQL.
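For reference, the ODP.NET custom-type mapping mentioned above looks roughly like this. This is a hedged sketch from memory of the ODP.NET UDT feature, not tested code: the schema/type name `SCOTT.CUSTOMER_T`, the class names, and the attribute names on the `.NET` side are illustrative, and historically this required the unmanaged ODP.NET provider (the managed driver gained UDT support only in recent versions).

```csharp
// Sketch only: maps an Oracle object type to a .NET class via ODP.NET.
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

public class Customer : IOracleCustomType
{
    [OracleObjectMapping("NAME")]    public string Name;
    [OracleObjectMapping("BALANCE")] public decimal Balance;

    // Copy .NET fields into the Oracle UDT representation and back.
    public void FromCustomObject(OracleConnection con, object udt)
    {
        OracleUdt.SetValue(con, udt, "NAME", Name);
        OracleUdt.SetValue(con, udt, "BALANCE", Balance);
    }

    public void ToCustomObject(OracleConnection con, object udt)
    {
        Name = (string)OracleUdt.GetValue(con, udt, "NAME");
        Balance = (decimal)OracleUdt.GetValue(con, udt, "BALANCE");
    }
}

[OracleCustomTypeMapping("SCOTT.CUSTOMER_T")]
public class CustomerFactory : IOracleCustomTypeFactory
{
    public IOracleCustomType CreateObject() => new Customer();
}

// Binding: the UDT travels as a single structured parameter.
// var p = new OracleParameter
// {
//     OracleDbType = OracleDbType.Object,
//     UdtTypeName  = "SCOTT.CUSTOMER_T",
//     Value        = new Customer { Name = "A", Balance = 0m }
// };
```

Whether this matches the TVP approach on throughput is an open question in this thread; the mapping machinery alone doesn't imply the same batch-insert win.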
OK, how about the performance of that? Does it offer similar performance improvements, or is it just sugar?

What you've done looks valuable in the specific use case where MSSQL is the back end and you need to write a lot of data quickly. I note there is almost no wiki (instructions). However (and I'm not trying to be critical at all here — I'm just being brutally realistic about what is possible, with a view to your code being adopted as widely as possible), I really do think you're going to have a very narrow audience in terms of getting your code out there. For example, I use PetaPoco against MSSQL, MySQL and SQLite. While a performance improvement in MSSQL would be a good thing, I would not be willing to trade interoperability with the other database systems for it. Nor would it be attractive to fragment my solutions by using a different tool for MSSQL.

So if you were keen on wider adoption, I suspect you would find it fairly easy to adapt your code into a PetaPoco plugin, as it has a similar origin — I can't guarantee this, but it's at least worth a look. There are a small number of other alternative .NET micro-ORMs, or you may also be happy to stay with what you've got. I suppose this thread serves as a pointer to the repo: https://github.com/SubSonic-Core/SubSonic.Core
What I have envisioned for the DAL is that the core data access layer does not care about the actual database server. I have stubbed out Oracle, MySQL and SQL Server extension projects, which are responsible for communicating with the database back end. Yes, I know the documentation is lacking; I have not been the best at creating documentation. The plan, however, is that when I know a feature in the core data access layer applies to only one RDBMS, the feature will be moved into the extension project for that RDBMS, and the core will be refactored to account for it.
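The provider split described above could take a shape like the following. This is purely a hypothetical illustration of the architecture being discussed, not SubSonic-Core's actual API: the interface name, members, and the capability flag are all invented for this sketch. The idea is that the core DAL programs against `System.Data.Common` abstractions, and each RDBMS extension project supplies the provider-specific pieces — including opting in to structured/table parameters where the back end supports them.

```csharp
// Hypothetical provider contract: the core DAL depends only on this,
// and the SqlServer/Oracle/MySql extension projects implement it.
using System.Data.Common;

public interface ISubSonicDbProvider
{
    // Provider-specific connection creation.
    DbConnection CreateConnection(string connectionString);

    // Identifier quoting differs per RDBMS: [x] vs "x" vs `x`.
    string QualifyIdentifier(string name);

    // Capability flag: providers with UDTT/structured-parameter support
    // (e.g. SQL Server) opt in; the core falls back to row-at-a-time
    // parameterised inserts when they don't.
    bool SupportsTableValuedParameters { get; }

    // Only meaningful when the flag above is true; 'table' would be a
    // DataTable (or equivalent) carrying the whole batch.
    DbParameter CreateTableParameter(string name, object table);
}
```

A design like this keeps the MSSQL-only fast path available without sacrificing the cross-database portability raised earlier in the thread.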
I just don't have the time, and many people love this project. If you're interested, please ping me.