Search - url parse

Search list

[VC/MFC] larbin-2.6.3

Description: larbin is an open-source web crawler/spider developed independently by the young French programmer Sébastien Ailleret. Its goal is to follow the URLs found on a page and crawl outward from them, ultimately providing a broad source of data for a search engine. larbin is only a crawler: it just fetches web pages, and how to parse them is left entirely to the user. It likewise does not handle storing pages in a database or building an index. larbin was designed from the start to be simple but highly configurable; a basic larbin crawler can fetch about five million pages per day, which is remarkably efficient.
Platform: | Size: 168960 | Author: lindabin | Hits:

[Windows Develop] xml_validator

Description: This demo allows the user to enter the URL of an XML document, load it, and parse it; it can be run on a web site. Through a text box the user can then navigate and manipulate the resulting tree. It is both a useful example of how to walk an XML tree using the XML Object Model and a handy tool for learning that object model.
Platform: | Size: 5120 | Author: 王浩 | Hits:
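For readers who want to try the same idea outside this demo, here is a minimal, hypothetical Java sketch (not taken from the package, which targets the XML Object Model) that loads an XML document from a URL with the standard javax.xml DOM API and walks its element tree:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class XmlWalker {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; replace with the XML document you want to load.
        String url = "https://example.com/note.xml";
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(url);   // fetch and parse the document
        walk(doc.getDocumentElement(), 0);   // recursively print the element tree
    }

    static void walk(Node node, int depth) {
        if (node.getNodeType() == Node.ELEMENT_NODE) {
            System.out.println(depth + ": " + node.getNodeName());
        }
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            walk(children.item(i), depth + 1);
        }
    }
}
```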

[JSP/Java] javaNetPachong

Description: This example shows how to parse a given URL, obtain information about it, and retrieve a web page's source code in Java.
Platform: | Size: 78848 | Author: 徐风 | Hits:
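As a rough illustration of the same task (not the package's own code), the sketch below parses a URL with java.net.URL, prints its components, and downloads the page source; the URL is a placeholder:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class UrlInfo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/index.html?q=test");

        // Basic information obtained by parsing the URL.
        System.out.println("protocol: " + url.getProtocol());
        System.out.println("host:     " + url.getHost());
        System.out.println("path:     " + url.getPath());
        System.out.println("query:    " + url.getQuery());

        // Fetch the page source line by line.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```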

[Search Engine] larbin-2.6.3

Description: larbin is an open-source web crawler/spider developed independently by the young French programmer Sébastien Ailleret. Its goal is to follow the URLs found on a page and crawl outward from them, ultimately providing a broad source of data for a search engine. larbin is only a crawler: it just fetches web pages, and parsing them is left to the user; it also does not handle storing pages in a database or building an index. It was designed to be simple but highly configurable, and a basic larbin crawler can fetch about five million pages per day. With larbin you can easily obtain or enumerate all the links of a single site, even mirror an entire site; you can also use it to build URL lists, for example retrieving the URLs of all pages and then fetching the linked XML or MP3 files, or customize larbin to serve as an information source for a search engine.
Platform: | Size: 167936 | Author: zfnh | Hits:

[Applications] BPARSER

Description: BParser can be used to parse any URL in VB.
Platform: | Size: 12288 | Author: SC | Hits:

[Web Server] WebServer2

Description: WEB server source code in VC, personally debugged and working (ask me if anything is unclear), with a detailed HTTP protocol reference PDF attached. It can parse HTML pages, and URLs may carry parameters. For more information see http://www.u9txt.com/
Platform: | Size: 230400 | Author: DJ | Hits:
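The entry above is VC++ code; purely as an illustration of what "URLs may carry parameters" involves, here is a hedged Java sketch (not the package's code) that splits a request URL's query string into name/value pairs:

```java
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryParser {
    // Splits "a=1&b=hello%20world" into {a=1, b=hello world}.
    public static Map<String, String> parseQuery(String query) throws Exception {
        Map<String, String> params = new LinkedHashMap<>();
        if (query == null || query.isEmpty()) {
            return params;
        }
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            String name = eq < 0 ? pair : pair.substring(0, eq);
            String value = eq < 0 ? "" : pair.substring(eq + 1);
            params.put(URLDecoder.decode(name, "UTF-8"),
                       URLDecoder.decode(value, "UTF-8"));
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/search?keyword=url%20parse&page=2";
        String query = url.substring(url.indexOf('?') + 1);
        System.out.println(parseQuery(query)); // {keyword=url parse, page=2}
    }
}
```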

[JSP/Java] url-compiler

Description: Helps you parse command parameters.
Platform: | Size: 2048 | Author: zin | Hits:

[Browser Client] emu-parse

Description: Parser for http://emu-land.net that downloads all games in a specified category. Usage: emu-parse.py portable gba. ROMs are saved in the `emu-parse-data` directory. Note: not all categories are accessible yet, so you may need to modify the source (add URL tokens to the g_categories dict).
Platform: | Size: 2048 | Author: oxfn | Hits:

[JSP/Java] URLTrans_Do

Description: Converts a URL into a host address plus a file name; parses the URL address to make it easier to fetch network resources.
Platform: | Size: 1024 | Author: 王博 | Hits:
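A minimal sketch of the same conversion, assuming nothing about the package's own classes: java.net.URL already exposes the host and the file part separately.

```java
import java.net.URL;

public class UrlToHostAndFile {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/downloads/report.pdf?lang=en");
        String host = url.getHost();  // "example.com"
        String file = url.getFile();  // "/downloads/report.pdf?lang=en" (path + query)
        String path = url.getPath();  // "/downloads/report.pdf"
        // Take just the final file name from the path.
        String name = path.substring(path.lastIndexOf('/') + 1);
        System.out.println(host + " -> " + name);
    }
}
```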

[LabView] cnekk

Description: jsoup is a Java HTML parser that can parse a URL or HTML text content directly. It provides a very convenient API for extracting and manipulating data via the DOM, CSS selectors, and jQuery-like methods.
Platform: | Size: 49152 | Author: Law | Hits:
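A short usage sketch of the jsoup API described above (the API calls are standard jsoup; the URL is a placeholder):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupLinks {
    public static void main(String[] args) throws Exception {
        // Fetch and parse a page directly from a URL.
        Document doc = Jsoup.connect("https://example.com/").get();
        System.out.println("Title: " + doc.title());

        // CSS selector, jQuery-style: every anchor that has an href attribute.
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.attr("abs:href") + "  " + link.text());
        }
    }
}
```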

[JSP/Java] jsoup-1.7.1-sources.jar

Description: jsoup is a Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods. jsoup implements the WHATWG HTML5 specification and parses HTML to the same DOM as modern browsers do. It can scrape and parse HTML from a URL, file, or string; find and extract data using DOM traversal or CSS selectors; manipulate HTML elements, attributes, and text; clean user-submitted content against a safe whitelist to prevent XSS attacks; and output tidy HTML. jsoup is designed to deal with all varieties of HTML found in the wild, from pristine and validating to invalid tag-soup, and will create a sensible parse tree. It is released under the MIT license and can safely be used in commercial projects.
Platform: | Size: 114688 | Author: 白水 | Hits:
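To illustrate two other features listed above, parsing HTML from a string and cleaning user-submitted content against a whitelist, here is a small sketch using the standard jsoup API of this era:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.safety.Whitelist;

public class JsoupStringAndClean {
    public static void main(String[] args) {
        // Parse HTML held in a string and extract data with a CSS selector.
        String html = "<html><body><p class='msg'>Hello, <b>jsoup</b>!</p></body></html>";
        Document doc = Jsoup.parse(html);
        System.out.println(doc.select("p.msg").text());   // Hello, jsoup!

        // Clean untrusted input against a safe whitelist to prevent XSS.
        String dirty = "<p>Nice post<script>alert('xss')</script></p>";
        String safe = Jsoup.clean(dirty, Whitelist.basic());
        System.out.println(safe);                          // <p>Nice post</p>
    }
}
```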

[Windows Develop] httpService_ScriptEngine

Description: HTTP service project with a built-in script-parsing engine; it uses the IActiveScriptParse interface to parse scripts that clients request via URL. Excellent.
Platform: | Size: 123904 | Author: hh | Hits:

[Crack Hack] transcodinCode

Description: Chinese character transcoding tool: encodes Chinese characters as hexadecimal escapes (Unicode, \u-prefixed, or &#x-prefixed) and decimal escapes (&#-prefixed), performs URL encoding, and parses/decodes the corresponding escaped forms (\u-prefixed, &#x-prefixed) and URL-encoded text back again.
Platform: | Size: 3072 | Author: 孙晶晶 | Hits:
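Purely as an illustration of the conversions this tool performs (not its own code), here is a hedged Java sketch of URL encoding/decoding and of producing the \u, &#x and &# escape forms for Chinese characters:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class Transcode {
    public static void main(String[] args) throws Exception {
        String text = "中文";

        // URL transcoding (percent-encoding) and decoding.
        String encoded = URLEncoder.encode(text, "UTF-8");
        System.out.println(encoded);                          // %E4%B8%AD%E6%96%87
        System.out.println(URLDecoder.decode(encoded, "UTF-8"));

        // \u-prefixed, &#x-prefixed (hex) and &#-prefixed (decimal) escapes.
        for (char c : text.toCharArray()) {
            System.out.printf("\\u%04x  &#x%x;  &#%d;%n", (int) c, (int) c, (int) c);
        }
    }
}
```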

[WEB Code] netduo

Description: Virtual host multi-domain website binding program 1.0. Features: the program redirects different URLs to the corresponding directories while hiding the directory name in the URL, and it can display a different site title, keywords, and description for each URL. Usage: Step 1: bind the site domains, e.g. abc1.com and abc2.com, on the virtual host. Step 2: resolve abc1.com and abc2.com to the same host IP, e.g. 1.1.1.1. Step 3: download this program, edit config.asp, and set the domain of site 1 to abc1.com and the domain of site 2 to abc2.com. Step 4: log in via FTP, upload the program to the web root, create the web1 and web2 directories, and upload the two site programs into web1 and web2 respectively.
Platform: | Size: 643072 | Author: DGADG088 | Hits:

[Internet-Network] youku

Description: A PHP tool that parses the file download address of youku.com videos. Although the site's algorithm has since changed, the approach taken by this program is still worth studying.
Platform: | Size: 1024 | Author: 王大刷 | Hits:

[android] Asynctask

Description: Asynchronous operation in Android development: enter a URL and the page is fetched and parsed with an asynchronous task. Helpful for learning the functions and structure of asynchronous operations.
Platform: | Size: 1328128 | Author: 吴成斌 | Hits:
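A minimal, hypothetical sketch of the pattern this example teaches (class and method bodies are my own, not the package's): an AsyncTask that downloads a page off the UI thread and hands the result back for parsing on the UI thread.

```java
import android.os.AsyncTask;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Fetches the HTML of a URL in the background, then reports it on the UI thread.
public class FetchPageTask extends AsyncTask<String, Void, String> {

    @Override
    protected String doInBackground(String... urls) {
        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(urls[0]).openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        } catch (Exception e) {
            return null; // network failure
        }
        return html.toString();
    }

    @Override
    protected void onPostExecute(String html) {
        // Runs on the UI thread: parse `html` and update views here.
    }
}

// Usage from an Activity: new FetchPageTask().execute("https://example.com/");
```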

[JSP/Java] jsoup_xici1.7.3

Description: Adds proxy-server support to jsoup (account authentication is not handled) by overloading connect(). Call it as Jsoup.connect(String url, String proxyhost, int proxyport). Changes: the org.jsoup.Connection interface Request gains abstract methods getProxyhost, getProxyport, setProxyhost, setProxyport; the org.jsoup.helper.HttpConnection inner class Request implements them; org.jsoup.Connection gains the abstract method proxy(String, int); org.jsoup.helper.HttpConnection adds proxy(String host, int port); and org.jsoup.Jsoup adds public static Connection connect(String url, String proxyhost, int proxyport) and public static Document parse(URL url, int timeoutMillis, String proxyhost, int proxyport).
Platform: | Size: 680960 | Author: 流水 | Hits:
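Based only on the signatures listed above, a usage sketch with this modified jar might look like the following (the URL, proxy host, and port are placeholders, and the behavior of the returned Connection is assumed to match standard jsoup):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ProxyFetch {
    public static void main(String[] args) throws Exception {
        // Overloaded connect() added by this package: url, proxy host, proxy port.
        Document doc = Jsoup.connect("https://example.com/", "127.0.0.1", 8080).get();
        System.out.println(doc.title());

        // Alternative entry point added by the package: parse with a timeout via the proxy.
        Document doc2 = Jsoup.parse(new java.net.URL("https://example.com/"),
                                    10 * 1000, "127.0.0.1", 8080);
        System.out.println(doc2.title());
    }
}
```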

[Internet-Network] crawler-master

Description: A C++ web crawler that uses a thread pool, parses URLs, and stores the downloaded pages.
Platform: | Size: 10240 | Author: zhou | Hits:

[WEB Code] KingCMS_2009

Description: Simple and flexible template tags with unlimited nesting: the PHP version breaks the multi-level nesting bottleneck of the ASP version, allowing template tags to be nested to any depth, passing values and reading URL parameters and POST values; it can pull content from language packs; PHP code can be written directly in templates and runs independently; and there is no need to memorize tags, since a parameter tag can display the tags that are available. Efficient template-parsing engine: it parses on demand, resolving only the tags that actually appear in a template, and can cache large or identical template tags to further speed up page generation and display.
Platform: | Size: 498688 | Author: 陈一帆 | Hits:

[Web Server] anonym_1.0

Description: Many sites do not want to reveal the referring address to a target URL, for example paid-task sites, private sharing communities, or cracking forums. Instead of disabling URL resolution, you can simply use an anonymous URL redirect; the destination site will then be unable to record the referring site in its statistics.
Platform: | Size: 28672 | Author: hikack | Hits:

CodeBus www.codebus.net