Hi,
The answer to my question may depend on both the SQL Server engine and the data provider; I do not know enough about the internal goings-on to judge this. Thus, I should mention that we are connecting to SQL Server 2000 from an ASP.NET 1.1 application using the SqlClient provider.
First, a short introduction to my scenario: we have an application that needs to support exporting and importing data between different instances of the database (in a disconnected manner). This has been implemented in such a way that the export generates SQL, which is then simply executed on the importing side. (It wasn't me! I'm fully aware of a number of reasons why this solution isn't any good, but this is what I have to work with.) This is done by reading the entire file into a string and then passing it, in one go, to ExecuteNonQuery.
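To make the pattern concrete, here is a minimal C# sketch of what that import path amounts to (the method name, script path and connection string are made up for illustration; our real code is wrapped up differently, but it boils down to this):

using System.Data.SqlClient;
using System.IO;

class ImportSketch
{
    static void RunImport(string scriptPath, string connectionString)
    {
        // Read the entire exported script into one string.
        string sql;
        using (StreamReader reader = new StreamReader(scriptPath))
        {
            sql = reader.ReadToEnd();
        }

        // Send the whole script to SQL Server in a single round trip.
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.CommandTimeout = 0; // no timeout - a large script can run for a long time
            cmd.ExecuteNonQuery();
        }
    }
}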
Now I'm faced with a customer whose import fails, unfortunately without any error information; there are no error messages, the data just doesn't show up anywhere. (Again, it wasn't me. My first course of action is adding try-catch blocks and some logging so we can, hopefully, get out of the dark when things go haywire.)
I had a brief look at the list of files they were trying to "import" (i.e. execute), and it turns out one of them is over 80 MB! That means our application is actually trying to pass an 80 MB string to ExecuteNonQuery. I wouldn't be too surprised if it's not designed for that...
In any case, given that we've taken this approach, I need to know what limitations, if any, this model imposes on the size of each script file. A potential quick fix could be to modify the export logic so it splits the scripts into several files. (That would at least beat trying to split on the importing side, which would require reliably parsing the script to determine where it could be broken up!)
Thanks in advance to anyone who can provide more information. If anyone feels like they have a decent basis for proposing a long-term course of action (more specific than just "redesign the import/export logic") feel free to suggest and elaborate as much as you'd like!
Happy coding,
Dag
With a script of that size, it could be that it is attempting too many joins, too many table references, etc., if I understand the situation correctly.
Is this something you can copy and paste into QA? I read it as it generates SQL script which is then executed on the other side...maybe I'm wrong.
|||The script is not a single query, but a large collection of queries. However, the entire content of each file from the export is sent from the application to SQL Server in a single round trip - there is no parsing on the import side.
The statements themselves are very simple, operating on a single row to be deleted from or inserted into a table. But there may then be tens of thousands, possibly hundreds of thousands, of INSERT statements in the file.
Anyway, you don't need to think of this in the context of our import/export logic; the precise question is "is there a limit on the length of SqlCommand.CommandText?", or, alternatively, "is there a limit on the number of queries that can be executed in a single batch?"
|||I once had a project with a situation similar to yours.
Back then I had problems with 'GO' statements between inserts. For instance, the following would work:
INSERT INTO MyTable (...) VALUES (...); INSERT INTO MyTable (...) VALUES (...)
But the following would not - it would generate an exception on ExecuteNonQuery:
INSERT INTO MyTable (...) VALUES (...)
GO
INSERT INTO MyTable (...) VALUES (...)
Back then we decided to use osql to run the queries out-of-process, instead of using ExecuteNonQuery (the result of the execution is NOT returned to the calling application, but it can be saved to a log file for manual inspection). The file would be saved somewhere on disk, and the calling application would run a .bat file to execute it against the database. Not very elegant, but it allowed us to make it work.
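I no longer have the exact code, but the idea was roughly the following C# sketch (server, database, flags and paths are placeholders, and it assumes osql.exe is on the PATH):

using System.Diagnostics;

class OsqlRunner
{
    static void RunScript(string scriptPath, string logPath)
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "osql";
        // -E uses Windows authentication; -i is the input script, -o the output log,
        // -b makes osql return a non-zero exit code when an error occurs.
        psi.Arguments = "-S myServer -d myDatabase -E -b -i \"" + scriptPath +
                        "\" -o \"" + logPath + "\"";
        psi.UseShellExecute = false;

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
            // With -b, a non-zero exit code means the script failed;
            // the details end up in the log file, not in the application.
        }
    }
}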
|||We experienced this several years ago. SQL Server seemed to drift off into never-land when attempting about 46,000 insert statements in QA. Our solution was to reduce the script size and all went well. We were only doing this for a temporary data transfer so we never tried to find out the actual problem.
This is probably not real helpful except to let you know that your situation has been reproduced elsewhere.
Steve
|||Hi,
thanks for your replies.
As far as I understand, the "GO" keyword merely instructs Query Analyzer to submit the statements so far as a batch and wait for them to complete before continuing with the rest of the script. In other words, it's for the client, not the server.
I think what we'll have to do in the short term is modify the export logic to make sure each batch doesn't become "too big" - and determine "too big" by trial and error. In the longer term we should probably implement a completely different procedure for this whole import/export functionality. It really would be better, IMO, if we exported actual data instead of scripts. (For many reasons; one obvious one is that if you want to support several database engines, the exporting side shouldn't need to know which database to generate scripts for.)
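Just to sketch what I mean by splitting on the export side (the statement array and the 10,000-per-file cap are placeholders; the real cap would come from the trial and error mentioned above):

using System.IO;

class ChunkedExporter
{
    const int StatementsPerFile = 10000; // "too big" threshold, to be found by trial and error

    static void Export(string[] statements, string outputDirectory)
    {
        int fileIndex = 0;
        int count = 0;
        StreamWriter writer = null;
        try
        {
            foreach (string statement in statements)
            {
                // Start a new file when the current one reaches the cap.
                if (writer == null || count == StatementsPerFile)
                {
                    if (writer != null) writer.Close();
                    fileIndex++;
                    string path = Path.Combine(outputDirectory, "export_" + fileIndex + ".sql");
                    writer = new StreamWriter(path);
                    count = 0;
                }
                writer.WriteLine(statement);
                count++;
            }
        }
        finally
        {
            if (writer != null) writer.Close();
        }
    }
}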
Now that I've dug a bit more into this, I've discovered we have other problems too, relating to transactions. Again it looks like it's simply the sheer size of the transaction that brings SQL Server to its knees, but I'm probably better off starting a separate thread for that, since this heading would become a bit misleading.
Thanks, and happy coding :)
Dag
|||Start the Profiler and see what SQL string is actually reaching the SQL Server; that might tell you whether it's a length-of-SQL-string issue, a syntax error, or a run-time error.
Saptagiri Tangudu
PNT Marketing Services
|||Hi,
There is no reason to believe that anything less than the whole string is sent to the server - the question is whether the server can deal with receiving a string of any size, because it seems to simply go down without even saying goodbye. That is, not only is there no response (error indication or otherwise) on the connection executing these huge scripts, but the server just stops responding to *any* requests. One can no longer connect to it with Enterprise Manager or Query Analyzer. It doesn't even recycle itself; you have to do that manually!
I've had it with MS' complete and utter lack of reliability. I shall react with disgust to every new feature that is introduced at the expense of actually making a product that can perhaps do less, but that WORKS. Sql Server is not a proper database.
|||There is indeed a limitation in Query Analyzer.
My script is about 7-8 KB, and if I exceed that by one byte, I get an "invalid column name" error.
If I remove the line (UNION SELECT 20992048) it works again.
Note: the UNION SELECT is used to optimize a huge IN clause, which performs badly on SQL Server.
Is there a registry hack or something similar to get around this problem, or is it in fact a problem with SQL Server?!
|||This is way late, but if anyone else reads this it may be of interest...
>> Back then I had problems with 'Go' statements between inserts
That makes sense, actually. In short, "GO" is *not* a command for SQL Server - it is for Query Analyzer, the client, and it instructs it to "go", that is, to submit the queries preceding the GO to the server.
This is useful, but it would be nice if there were a statement like this that was actually supported by SQL Server. While it works well enough to perform, say, multiple inserts in a single batch, separating each query with a semicolon,
INSERT INTO MyTable VALUES (v1, v2, v3); INSERT INTO MyTable VALUES (a, b, c); ...
not all operations will work as a single batch. For example, a CREATE PROCEDURE (or other object type) statement must be the first statement in a batch. So in Query Analyzer you can write
IF EXISTS (SELECT ... ) DROP TABLE myTable;
GO
CREATE TABLE myTable...
and it'll work just fine. But if you save the script to a file, you cannot just read the entire script and call ExecuteNonQuery - you'd have to split it into batches separated by GO to make it work.
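For anyone who ends up doing that split in code, a naive C# sketch could look like this (the connection string is a placeholder, and the regex assumes GO always sits alone on its own line, which is how Query Analyzer expects it anyway):

using System.Data.SqlClient;
using System.Text.RegularExpressions;

class BatchRunner
{
    static void ExecuteScript(string script, string connectionString)
    {
        // Split on lines that contain nothing but GO (the client-side batch separator).
        string[] batches = Regex.Split(script, @"^\s*GO\s*$",
            RegexOptions.Multiline | RegexOptions.IgnoreCase);

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (string batch in batches)
            {
                if (batch.Trim().Length == 0) continue; // skip empty fragments between separators
                SqlCommand cmd = new SqlCommand(batch, conn);
                cmd.ExecuteNonQuery();
            }
        }
    }
}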