I have a list of website URLs that need to be crawled. Each crawl should store its Lucene index in its own file directory, and I want all of the websites to be indexed at the same time. Can you provide me with directions on how this can be accomplished? The plug-in Arachnode.Plugins.CrawlActions.ManageLuceneDotNetIndexes reads the file location where results are stored from the Settings field, which would have to change for each new instance of the crawl. I don't want to overwrite the Settings field, but it may be necessary. Any insight will be very helpful.
Yes, change 'Settings' in 'private static void AssignApplicationSettingsForDebug()' - you are correct about how this should be done.
I don't have a 'private static void AssignApplicationSettingsForDebug()' method; I am creating my own instance of the crawler. I take your point, though: for each crawl I have to overwrite the default value of the Lucene folder directory. But in that model you still have only one instance of the Crawler. Is it possible to have multiple instances of Crawler<ArachnodeDAO> _crawler running at the same time, say inside of a loop?
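To make the question concrete, here is a minimal sketch of the loop I have in mind. It only uses the standard .NET Parallel.ForEach; RunCrawl is a hypothetical wrapper standing in for wherever a fresh Crawler<ArachnodeDAO> would be created and its ManageLuceneDotNetIndexes plug-in pointed at a per-site index directory (I'm assuming that can be set per instance rather than via the shared Settings field):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class ParallelCrawlSketch
{
    // Hypothetical wrapper: inside this you would construct a new
    // Crawler<ArachnodeDAO> and direct its ManageLuceneDotNetIndexes
    // plug-in at indexDirectory instead of the shared Settings value.
    static void RunCrawl(string url, string indexDirectory)
    {
        Console.WriteLine($"Crawling {url} -> {indexDirectory}");
        // e.g. var crawler = new Crawler<ArachnodeDAO>(/* per-instance config */);
        //      crawler.Crawl(/* a CrawlRequest for url */);
    }

    static void Main()
    {
        string[] sites = { "http://example.com", "http://example.org" };

        // One crawl per site, each writing to its own Lucene index
        // directory, all running at the same time.
        Parallel.ForEach(sites, site =>
        {
            string dir = Path.Combine("LuceneIndexes", new Uri(site).Host);
            Directory.CreateDirectory(dir);
            RunCrawl(site, dir);
        });
    }
}
```

The open question is whether Crawler<ArachnodeDAO> is safe to instantiate and run once per iteration like this, or whether the shared Settings state makes concurrent instances step on each other.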