Introduction (note from the author): for any usage issues, please search for answers yourself.
This is the original source code of a web page crawler. Its functionality is fairly broad, but the code is tangled and has no layered design. Basic functions: crawl the links on a page, automatically download pages, and intercept URLs according to configurable inbound rules. Special functions: identify the "next page" link and capture links automatically; links that follow a pattern can be batch-generated; rules can be imported and saved; character filtering; automatic insertion into a database. The author is still working out how to grab pictures with the crawler, so that will come in a later release.
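The description above mentions automatically capturing page links and filtering them by an interception rule. A minimal sketch of how that step might look (the class and method names here are hypothetical, not taken from the package; the actual implementation lives in files such as autoLink.cs and InterScptStr.cs):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class LinkExtractor
{
    // Extract href targets from raw HTML; a rough stand-in for the
    // "automatically capture links" step the README describes.
    public static List<string> ExtractLinks(string html, string mustContain)
    {
        var links = new List<string>();
        var rx = new Regex("href\\s*=\\s*[\"']([^\"'#]+)[\"']",
                           RegexOptions.IgnoreCase);
        foreach (Match m in rx.Matches(html))
        {
            string url = m.Groups[1].Value;
            // Simple substring filter, standing in for the tool's
            // configurable interception rules.
            if (mustContain.Length == 0 || url.Contains(mustContain))
                links.Add(url);
        }
        return links;
    }

    static void Main()
    {
        string html = "<a href='/news/1.html'>1</a> <a href='/about'>x</a>";
        foreach (var u in ExtractLinks(html, "/news/"))
            Console.WriteLine(u);   // prints /news/1.html
    }
}
```

A real crawler would fetch the HTML over HTTP first (e.g. with HttpClient) and would normally use an HTML parser rather than a regex, but the sketch shows the capture-then-filter flow the README describes.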
Package: 57578871capturenet_page.rar

File list:
indb\app.config
indb\autoLink.cs
indb\autoLink.Designer.cs
indb\autoLink.resx
indb\autoRexLink.cs
indb\autoRexLink.Designer.cs
indb\autoRexLink.resx
indb\ClassDiagram1.cd
indb\downAsp.cs
indb\Download Drive 1.ico
indb\Form1.cs
indb\Form1.Designer.cs
indb\Form1.resx
indb\Form3.cs
indb\Form3.Designer.cs
indb\Form3.resx
indb\frmIni.cs
indb\frmIni.Designer.cs
indb\frmIni.resx
indb\FrmLinkShow.cs
indb\FrmLinkShow.Designer.cs
indb\FrmLinkShow.resx
indb\FrmRegistration.cs
indb\FrmRegistration.Designer.cs
indb\FrmRegistration.resx
indb\InDb.csproj
indb\InDb.csproj.user
indb\insertDB.cs
indb\InterScptStr.cs
indb\logo_title.gif
indb\openDB.cs
indb\Program.cs
indb\recoverFileDir.cs
indb\SearchAndReplace.cs
indb\SearchAndReplace.Designer.cs
indb\SearchAndReplace.resx
indb\Settings.cs
indb