The only workaround is to set the rows value large enough to return all documents. However, I wouldn't recommend this for anything larger than about 1,000 documents. If the number of documents you're fetching is large enough, you will run into memory or timeout issues generating and parsing the XML response.
For example, if there are 2-3 million documents in your index, do you really want all of that in a single response? It's paginated for a reason. You should probably leverage it.
Of secondary concern... Why are you doing this to begin with? What's the point of putting a bunch of data into a search index, if you are just going to pull it ALL out? You may be better off using your original data source at that point.
If you need to get everything out, you can either set the number of rows ridiculously high (as indicated above, with the caveat that, well, it won't work because you'll run out of memory) or iterate through your results using "rows" and "start". Pseudocode:

    rows = 100        (or 1000, or 100,000, or whatever you can handle)
    start = 0
    numresults = (total hits reported by the first query)
    while (start < numresults)
        result = query(...&rows=...&start=...)
        start += rows
    endwhile

See wiki.apache.org/solr/CommonQueryParameters for use of "start". Also remember that when you're grabbing gobs of documents, use the 'fl' parameter to only pull back the fields you're actually going to use.
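The loop above can be sketched in Python using only the standard library. This is a minimal sketch, not a definitive client: the Solr URL and core name (`mycore`) are assumptions you'd replace with your own, and the pagination windows are split into a separate helper so the logic is easy to check on its own.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical endpoint -- adjust host, port, and core name for your setup.
SOLR_URL = "http://localhost:8983/solr/mycore/select"

def page_params(num_found, rows):
    """Yield (start, rows) windows that cover num_found documents."""
    start = 0
    while start < num_found:
        yield start, rows
        start += rows

def fetch_all(query="*:*", rows=100, fl="id"):
    """Iterate through every matching document, one page at a time.

    Uses wt=json for easy parsing and the fl parameter to pull back
    only the fields you actually need.
    """
    # First request with rows=0 just to learn numFound.
    params = {"q": query, "rows": 0, "start": 0, "wt": "json", "fl": fl}
    with urlopen(SOLR_URL + "?" + urlencode(params)) as resp:
        num_found = json.load(resp)["response"]["numFound"]

    for start, n in page_params(num_found, rows):
        params.update(start=start, rows=n)
        with urlopen(SOLR_URL + "?" + urlencode(params)) as resp:
            for doc in json.load(resp)["response"]["docs"]:
                yield doc
```

Keeping `rows` in the low hundreds or thousands gives each request a bounded response size, which is the whole point of paginating instead of asking for everything at once.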
q=*:*&version=2.2&start=0&rows=10&indent=on&wt=json This will match all records. The number of records returned is limited by the rows parameter.