Hi,
I was trying to crawl the URL "http://www.".
Don't ask why; this is a long story. I just received an exception, and I wonder if there is a chance that AN will pick up such URIs from the net. If it does, then this is a bug; if not, you can ignore this.
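For reference, here is a minimal sketch of the kind of pre-flight check I could add on my side; the helper is hypothetical (not part of the AN API) and uses only standard .NET calls:

```csharp
using System;
using System.Linq;

// Hypothetical pre-flight check (not part of the AN API). It rejects strings
// such as "http://www." before they are ever submitted as a CrawlRequest.
public static class CrawlUriGuard
{
    public static bool IsCrawlableHttpUri(string candidate, out Uri uri)
    {
        // Must parse as an absolute URI at all.
        if (!Uri.TryCreate(candidate, UriKind.Absolute, out uri))
            return false;

        // Only http/https are worth queueing for a web crawl.
        if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
            return false;

        // "http://www." may still parse, but its host is degenerate ("www" or
        // "www." depending on the runtime). Require at least two non-empty
        // dot-separated labels, e.g. "example.com".
        string[] labels = uri.Host.Split('.');
        return labels.Length >= 2 && labels.All(label => label.Length != 0);
    }

    public static void Main()
    {
        Uri uri;
        Console.WriteLine(IsCrawlableHttpUri("http://www.", out uri));         // False
        Console.WriteLine(IsCrawlableHttpUri("http://example.com/", out uri)); // True
    }
}
```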
Hmm... odd that you get the exception there...
I get it at the console. Did you feed the invalid Uri from the DB?
For best service when you require assistance:
Skype: arachnodedotnet
Yes, and I added it as a new CrawlRequest :\
Hmm... I did the same, but the exception is handled properly for me.
Maybe this has something to do with the fact that I am running the release version (4) in debug mode?
If it happens here, it must happen on other machines :)
Looking at your first screenshot a bit closer, I see the error occurring in the internal overload, which means your exception IS coming from an AbsoluteUri in the CrawlRequests table. There ARE CHECK CONSTRAINTS on the table, but your condition isn't caught. I think this is OK, though. I encourage users NOT to use the CrawlRequests table directly, as this table is really for the Cache to use for storage, OR for CrawlRequests submitted through the ArachnodeDAO.
AN will catch the error and report it to the user if this AbsoluteUri is supplied through code, but apparently won't if you directly manipulate the CrawlRequests table... hmm...
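For illustration, here is a minimal sketch of what a safe direct insert would have to look like. The connection string parameter and the single-column INSERT are hypothetical (the real table has more columns and constraints), and the guard mirrors the sketch earlier in this thread:

```csharp
using System;
using System.Data.SqlClient;
using System.Linq;

// Sketch only: a direct INSERT into CrawlRequests bypasses the checks AN
// performs when a CrawlRequest is submitted through code, so the caller has
// to validate first. Column list and connection handling are illustrative.
public static class DirectCrawlRequestInsert
{
    public static void Enqueue(string connectionString, string absoluteUri)
    {
        // Reject anything that is not an absolute http/https URI.
        Uri uri;
        if (!Uri.TryCreate(absoluteUri, UriKind.Absolute, out uri)
            || (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps))
        {
            throw new ArgumentException("Not an absolute http(s) URI: " + absoluteUri);
        }

        // Reject degenerate hosts such as "www." so they never reach the
        // table, instead of surfacing later as a crawler exception.
        string[] labels = uri.Host.Split('.');
        if (labels.Length < 2 || labels.Any(label => label.Length == 0))
        {
            throw new ArgumentException("Degenerate host in: " + absoluteUri);
        }

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO dbo.CrawlRequests (AbsoluteUri) VALUES (@AbsoluteUri)", connection))
        {
            command.Parameters.AddWithValue("@AbsoluteUri", uri.AbsoluteUri);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

Submitting through the ArachnodeDAO remains the preferred path, since AN then performs this kind of validation for you.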
It's probably OK to do nothing on this one.
Veto? Override? Do you think I should modify the DB CHECK CONSTRAINTS to not allow this?
- Mike
This is an old one :)
But as far as I can recall... I didn't populate the URI in the CrawlRequests table manually. I used the code to crawl this URI... :\