Location:
Search - jieba
Search list
Description: 超级解霸 (STHVCD) V2.0 source code. Machines were still 386-class at the time, so it has no encryption protection. It was very hard to get hold of.
Platform: |
Size: 643778 |
Author: coolhome |
Hits:
Description: A Street Fighter game. The game is very realistic and a highly playable mobile game.
Platform: |
Size: 60947 |
Author: asdf |
Hits:
Description: Street Fighter running on a mobile phone. It is also good for learning; worth studying.
Platform: |
Size: 42682 |
Author: 谢进昌 |
Hits:
Description: A J2ME game: Street Fighter running on a mobile phone.
Platform: |
Size: 61581 |
Author: zzc |
Hits:
Description: Source code for a common Street Fighter game. It is simple, but suitable for beginners.
Please give credit when reposting; this is itself a repost.
Platform: |
Size: 18615 |
Author: 123 |
Hits:
Description: 超级解霸 (STHVCD) V2.0 source code. Machines were still 386-class at the time, so it has no encryption protection. It was very hard to get hold of.
Platform: |
Size: 643072 |
Author: |
Hits:
Description: A Street Fighter game. The game is very realistic and a highly playable mobile game.
Platform: |
Size: 60416 |
Author: asdf |
Hits:
Description: 龙风奇超级数学解霸 (Super Math Jieba) V1.0, an original work by the 龙风奇 studio.
Platform: |
Size: 262144 |
Author: 沁风 |
Hits:
Description: jieba word-segmentation software, an open-source segmentation library for Python. It includes usage examples and is simple and easy to use.
Platform: |
Size: 4698112 |
Author: gaozhe |
Hits:
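The entry above describes dictionary-based Chinese word segmentation, which is what jieba provides (internally it builds a DAG from a prefix dictionary and picks the most probable path). As a simplified illustration of the idea — not jieba's actual code, and using a made-up toy dictionary — a forward-maximum-matching segmenter can be sketched like this:

```python
# Simplified forward-maximum-matching segmenter (illustration only).
# The dictionary below is a hypothetical toy; jieba ships a large one.
TOY_DICT = {"我", "来到", "北京", "清华", "清华大学", "大学"}

def fmm_cut(sentence, dictionary=TOY_DICT, max_len=4):
    """Greedily match the longest dictionary word at each position."""
    tokens = []
    i = 0
    while i < len(sentence):
        # Try the longest candidate first, down to a single character.
        for j in range(min(len(sentence), i + max_len), i, -1):
            word = sentence[i:j]
            if len(word) == 1 or word in dictionary:
                tokens.append(word)
                i = j
                break
    return tokens

print(fmm_cut("我来到北京清华大学"))  # ['我', '来到', '北京', '清华大学']
```

Greedy maximum matching is much weaker than jieba's probabilistic path selection, but it shows why a dictionary plus a matching strategy is the core of this family of segmenters.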
Description: A Java package for jieba word segmentation. jieba is usually distributed as a Python package; this one can be used for jieba segmentation from Java.
Platform: |
Size: 1970176 |
Author: xiexie2011 |
Hits:
Description: How to implement jieba word segmentation in Python.
Platform: |
Size: 24576 |
Author: 老顾 |
Hits:
Description: Splits sentences into small independent words to extract information; matching against a data dictionary yields the useful key information, enabling intelligent question selection or question answering.
Platform: |
Size: 1024 |
Author: 京城阿祖 |
Hits:
Description: (garbled, unintelligible submission text)
Platform: |
Size: 1135616 |
Author: nuptxiaoyuan |
Hits:
Description: jieba word segmentation, used to segment documents; the output can serve as material for text analysis.
Platform: |
Size: 430080 |
Author: 先先生 |
Hits:
Description: Java packages used for jieba and ansj word segmentation.
Platform: |
Size: 22866944 |
Author: 123xxoo |
Hits:
Description: A MATLAB toolkit for jieba (结巴) word segmentation, used in many Chinese-segmentation pattern-recognition programs. Using an existing function toolkit improves productivity; installation instructions are included.
Platform: |
Size: 10193920 |
Author: 啦丿啦 |
Hits:
Description: Precise mode tries to segment the sentence as accurately as possible and suits text analysis.
Full mode scans out every word-forming candidate in the sentence; it is very fast but cannot resolve ambiguity.
Search-engine mode builds on precise mode by re-segmenting long words to improve recall, and suits search-engine tokenization.
Platform: |
Size: 8325120 |
Author: 艾尚霄珊资 |
Hits:
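The three modes described above can be illustrated without jieba itself. The sketch below, over a made-up toy dictionary (hypothetical, not jieba's real algorithm), shows what "full mode" scanning and the search-engine-style re-splitting of long words amount to:

```python
# Toy dictionary for illustration; jieba's real one is far larger.
TOY_DICT = {"清华", "清华大学", "华大", "大学"}

def full_mode(sentence, dictionary=TOY_DICT):
    """Full mode: scan out every substring that is a dictionary word."""
    words = []
    n = len(sentence)
    for i in range(n):
        for j in range(i + 1, n + 1):
            if sentence[i:j] in dictionary:
                words.append(sentence[i:j])
    return words

def resplit_long(word, dictionary=TOY_DICT):
    """Search-engine style: also emit 2-char sub-words of a long word,
    so short queries can still hit documents indexed on long words."""
    subs = [word[i:i + 2] for i in range(len(word) - 1)
            if word[i:i + 2] in dictionary]
    return subs + [word]

print(full_mode("清华大学"))     # ['清华', '清华大学', '华大', '大学']
print(resplit_long("清华大学"))  # ['清华', '华大', '大学', '清华大学']
```

Full mode emits overlapping candidates (hence the ambiguity it cannot resolve), while the re-splitting step is what boosts recall in the search-engine mode.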
Description: jieba word segmentation, used in Python to segment Chinese text.
Platform: |
Size: 7388160 |
Author: risiding |
Hits:
Description: Basic natural-language processing with jieba, including a custom stop-word list, a custom dictionary, and word-frequency statistics, demonstrated with practical examples.
Platform: |
Size: 21504 |
Author: 丁咚 |
Hits:
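The workflow in the entry above — segment, drop stop words, count frequencies — can be sketched without jieba itself, assuming the text has already been segmented into a token list. The stop-word set here is a made-up example; real projects load one from a file:

```python
from collections import Counter

# Hypothetical stop-word list; in practice this is loaded from a file
# and applied to jieba's output tokens.
STOPWORDS = {"的", "了", "是", "，", "。"}

def word_freq(tokens, stopwords=STOPWORDS):
    """Count frequencies of tokens that are not stop words."""
    return Counter(t for t in tokens if t not in stopwords)

tokens = ["分词", "是", "文本", "分析", "的", "基础", "，",
          "分词", "很", "重要", "。"]
print(word_freq(tokens).most_common(2))  # [('分词', 2), ('文本', 1)]
```

Filtering stop words before counting keeps function words and punctuation from dominating the frequency table, which is usually the point of the custom stop-word list the entry mentions.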
Description: Chatbot.
Principle: strictly speaking, an "open-domain generative dialogue model based on deep learning". The framework is Keras (a high-level wrapper over TensorFlow); the approach is LSTM (long short-term memory, a variant of the RNN) + seq2seq (sequence-to-sequence), plus an attention mechanism. The segmentation tool is jieba and the UI is Tkinter; the model is trained on the 青云 (Qingyun) corpus (100,000+ chit-chat dialogues).
Runtime environment: Python 3.6+, TensorFlow, pandas, numpy, jieba.
Platform: |
Size: 57974784 |
Author: 白子画灬 |
Hits: