Introduction: if you run into problems using this package, please search for a solution yourself.
This package combines a Chinese word segmenter ("fenci") with the analysis portion of Lucene's source code. Note for downloaders: the segmentation DLL itself is not included. I hope everyone can help improve it.
Packet: 13898364fenci.rar. File list:
data\sforeign_u8.txt
data\snotforeign.txt
data\snotname_u8.txt
data\snumbers_u8.txt
data\ssurname_u8.txt
data\tforeign_u8.txt
data\tnotname_u8.txt
data\tnumbers_u8.txt
data\tsurname_u8.txt
lucene\AbstractDictionary.java
lucene\bothlexu8.txt
lucene\ChunkWordIdentifier.java
lucene\data\sforeign_u8.txt
lucene\data\snotforeign.txt
lucene\data\snotname_u8.txt
lucene\data\snumbers_u8.txt
lucene\data\ssurname_u8.txt
lucene\data\tforeign_u8.txt
lucene\data\tnotname_u8.txt
lucene\data\tnumbers_u8.txt
lucene\data\tsurname_u8.txt
lucene\DictionaryFactory.java
lucene\FileWordBase.java
lucene\FixWLengthLexicon.java
lucene\HTML2Text.java
lucene\Lexicon.java
lucene\LexiconIndexEntry.java
lucene\PlainTextAnalyzer.java
lucene\Segmenter.java
lucene\SegmentGBKAnalyzer.java
lucene\SegmentGBKTokenizer.java
lucene\SegmentTokenizer.java
lucene\SimpleDictionary.java
lucene\SimpleGBKAnalyzer.java
lucene\SimpleGBKTokenizer.java
lucene\SimpleWordIdentifier.java
lucene\simplexu8.txt
lucene\tradlexu8.txt
lucene\WordIdentifier.java
lucene\WordNode.java
lucene\XDChineseAnalyzer.java
lucene\XDChineseTokenizer.java
lucene\data
data
lucene
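The Java sources listed above (e.g. `Segmenter.java`, `Lexicon.java`, `WordNode.java`, `FixWLengthLexicon.java`) point to a dictionary-driven segmenter. As illustration only, here is a minimal, self-contained sketch of forward maximum matching, a common technique behind such lexicon-based Chinese segmenters; the class and method names (`MaxMatchSegmenter`, `segment`) are invented for this example and are not the actual API of this package.

```java
import java.util.*;

// Forward maximum matching: at each position, take the longest
// dictionary word starting there; fall back to a single character.
public class MaxMatchSegmenter {
    private final Set<String> lexicon;
    private final int maxWordLen;

    public MaxMatchSegmenter(Collection<String> words) {
        this.lexicon = new HashSet<>(words);
        int max = 1;
        for (String w : words) max = Math.max(max, w.length());
        this.maxWordLen = max;
    }

    public List<String> segment(String text) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            int end = Math.min(i + maxWordLen, text.length());
            String match = text.substring(i, i + 1); // fallback: one char
            // Try the longest candidate first, shrinking until a hit.
            for (int j = end; j > i + 1; j--) {
                String cand = text.substring(i, j);
                if (lexicon.contains(cand)) { match = cand; break; }
            }
            tokens.add(match);
            i += match.length();
        }
        return tokens;
    }

    public static void main(String[] args) {
        MaxMatchSegmenter seg = new MaxMatchSegmenter(
            Arrays.asList("中国", "中国人", "人民"));
        System.out.println(seg.segment("中国人民")); // greedy: [中国人, 民]
    }
}
```

The classic ambiguity in the example ("中国人民" splitting greedily as 中国人 + 民 rather than 中国 + 人民) is exactly why real segmenters layer heuristics, name/number dictionaries (the `data\*_u8.txt` files above), and statistical scoring on top of plain maximum matching.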