I have a crawl_dump.log that has grown rather big (579 MB). Everything was going great: it had 1,480,000 pages indexed with about 200K more to go. After two weeks of crawling I checked back in the morning and it was no longer running. All I get when I click the crawling tab is:
Run in background
Do not interrupt the script even after closing the browser window until the crawling is complete
There is no option to resume the existing crawl. I have plenty of execution time and memory configured in php.ini.
How can I fix this?
Also, when I try to run it via SSH, this is all the output I get:
<title>XML Sitemaps - Generation</title>
<meta http-equiv="Content-type" content="text/html;charset=iso-8859-15" />
<link rel=stylesheet type="text/css" href="pages/style.css">
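For what it's worth, getting that HTML header over SSH is what the generator's pages print; the usual approach for a long crawl is to launch the crawler script directly with the PHP CLI and detach it with nohup, so it keeps running after the SSH session closes. The install path and the script name (runcrawl.php) below are assumptions about your setup, not something confirmed from your post; the second half just demonstrates the same nohup pattern with a placeholder command:

```shell
# Assumed path and script name -- adjust to your generator install:
# cd /path/to/generator
# nohup php runcrawl.php > crawl-resume.log 2>&1 &

# Same nohup pattern, shown with a placeholder command so it is runnable:
nohup sh -c 'echo crawl started' > crawl-resume.log 2>&1 &
wait                     # wait for the backgrounded job to finish
cat crawl-resume.log     # the crawler's output ends up in this file
```

Redirecting stdout and stderr into a log file also gives you something to inspect if the crawl dies again overnight.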
Thanks in advance