I'm not sure if I need to create a CrawlRule or a CrawlAction...
I am trying to extend the crawler with a plugin. I'm looking for pages with specific content; if it exists, I'd like to save that specific content to the database, but not store the actual content of the site. I originally did this as a CrawlRule, but my performance has dropped really badly. I'm using HtmlAgilityPack to check whether the criteria exist in the page.
Any suggestions on how I can figure out why I have slowed down so much would be appreciated.
CrawlRules should be used for filtering, and are called once per Discovery, for every Discovery. For example, if you find 100 HyperLinks on a page, your CrawlRule will be called 100 times to check whether each Discovery 'IsDisallowed' or 'IsStorable'.
CrawlActions are used to 'Do Something', and are called only once per CrawlRequest, according to the following designations (which you already know; just adding an image for others):
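Given the above, moving the HtmlAgilityPack parsing out of the CrawlRule (called once per Discovery) and into a CrawlAction (called once per crawled page) avoids parsing the same page dozens of times. Here is a minimal sketch of that idea; the base class name `ACrawlAction`, the `PerformAction` override, and the `CrawlRequest` properties follow the general Arachnode.net plugin pattern but are assumptions that should be checked against your version, and the XPath criteria and `SaveFragmentToDatabase` helper are hypothetical placeholders:

```csharp
using Arachnode.SiteCrawler.Value.AbstractClasses; // assumed namespace
using HtmlAgilityPack;

public class SaveMatchingContentAction : ACrawlAction // assumed base class
{
    public override void PerformAction(CrawlRequest crawlRequest)
    {
        // Parse the downloaded page once, here in the action,
        // instead of once per Discovery in a CrawlRule.
        var document = new HtmlDocument();
        document.LoadHtml(crawlRequest.Html); // assumed property holding the page source

        // Hypothetical criteria: a node with a specific id.
        HtmlNode match = document.DocumentNode.SelectSingleNode("//div[@id='target']");

        if (match != null)
        {
            // Save only the matched fragment; do not store the full page.
            SaveFragmentToDatabase(crawlRequest.Discovery.Uri.AbsoluteUri, match.OuterHtml);
        }
    }

    private void SaveFragmentToDatabase(string uri, string fragment)
    {
        // Placeholder for your own data-access code.
    }
}
```

To avoid storing the site's content itself, you would also disable storage for the Discovery (e.g. via the 'IsStorable' designation mentioned above) rather than doing that work in the action.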