Location:
Search - bag of word
Search list
Description: Special note: Chaining channels in .NET Remoting. The source code is not original; for details, see the "Notes" Word document included in the archive.
Platform: |
Size: 114688 |
Author: 王季 |
Hits:
Description: Bag-of-words image recognition algorithm; a demo from Fei-Fei Li's lab at Stanford University.
Platform: |
Size: 32277504 |
Author: 李桃 |
Hits:
Description: Bag-of-words code by R. Fergus, L. Fei-Fei, and A. Torralba.
Platform: |
Size: 2554880 |
Author: luwenhao |
Hits:
Description: Bag-of-features.
Platform: |
Size: 15360 |
Author: 黄泽铧 |
Hits:
Description: A classic paper on bag-of-words image classification; well suited for beginners. (Bag-of-features based classification.)
Platform: |
Size: 763904 |
Author: leexingguo |
Hits:
Description: MATLAB video processing, image processing, video playback, and image tracking.
Platform: |
Size: 1024 |
Author: 杨全祥 |
Hits:
Description: An image-pair matching algorithm that combines bag-of-words features with color features; notable for its high matching accuracy.
Platform: |
Size: 24794112 |
Author: nedved |
Hits:
Description: Binary face classification code based on the bag-of-visual-words model; classification is performed with two methods, including pLSA.
Platform: |
Size: 359424 |
Author: ryetal |
Hits:
Description: Semantic similarities for bag-of-words.
Platform: |
Size: 21695488 |
Author: tfidf |
Hits:
Description: The image features use dense SIFT descriptors, represented with a bag-of-words (BoW) model. The dictionary is generally built from the training set only, since the test set should remain unseen; in practice you never know in advance what the test images will be, so the BoW dictionary here is likewise built from the training set alone.
The BoW idea itself is simple: the key is understanding how to build the dictionary and how to map an image onto the dictionary dimensions. (This question also comes up often in interviews.)
Once both the training and test images have been described with the BoW model, an SVM classifier can be trained for classification.
Besides the SVM's RBF kernel, a custom kernel is also defined here: the histogram intersection kernel. Many papers report that this kernel works well, and the experimental results bear that out (can this be proved theoretically?). Defining it also shows how to use a custom kernel for SVM classification.
Platform: |
Size: 3585024 |
Author: lipiji |
Hits:
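The histogram intersection kernel mentioned in the entry above can be supplied to an SVM as a callable custom kernel. A minimal sketch with scikit-learn, using made-up toy BoW histograms rather than the uploaded code:

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(X, Y):
    # K(x, y) = sum_i min(x_i, y_i) over histogram bins
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

# Toy BoW histograms: 4 "images", 5 visual words (hypothetical data).
X_train = np.array([[3, 0, 1, 0, 2],
                    [2, 1, 0, 0, 3],
                    [0, 4, 0, 2, 0],
                    [0, 3, 1, 3, 0]], dtype=float)
y_train = np.array([0, 0, 1, 1])

# Pass the kernel function directly; sklearn evaluates it on fit/predict.
clf = SVC(kernel=histogram_intersection)
clf.fit(X_train, y_train)
print(clf.predict(np.array([[2, 0, 1, 1, 2]], dtype=float)))
```

The test histogram overlaps the class-0 training histograms far more than the class-1 ones, so the classifier assigns it to class 0.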
Description: A bag-of-words program, including feature extraction and codebook construction.
Platform: |
Size: 3547136 |
Author: cho |
Hits:
Description: This paper is a paid download from CNKI. Published in September 2011, it covers the framework and basic content of the Bag of Words algorithm and is a good introductory reference for learning it. Bag of Words is an effective object recognition algorithm based on semantic feature extraction and representation: it draws on the strengths of text retrieval algorithms, organizes an image as a collection of visual words, extracts the semantic features of objects, and achieves effective detection and recognition of objects of interest.
Platform: |
Size: 310272 |
Author: Jessicaying |
Hits:
Description: Graphic detection and recognition using the bag-of-words method: feature vectors are found with K-means clustering, and data are trained and tested with a Naive Bayes classifier or the PLCA method.
Platform: |
Size: 32281600 |
Author: Huang Hua |
Hits:
Description: NLP-Reduce is a natural language query interface that allows its users to enter full English questions, sentence fragments, and keywords. It processes queries as bags of words and employs only a reduced set of natural language processing techniques, such as stemming and synonym expansion. Dependencies between words or phrases in the queries are identified only by the relationships that exist between the elements in a query's knowledge base. This weakness is also its major strength, as it is completely portable and robust to ungrammatical or deficient input.
Platform: |
Size: 34363392 |
Author: Daniella |
Hits:
Description: A detailed explanation of the BoW model, entirely in English; worth a look for anyone interested.
Platform: |
Size: 6714368 |
Author: 郭芙蓉 |
Hits:
Description: The bag-of-visual-words (BoV/BoW/BoF) representation of an image or video.
Platform: |
Size: 2048 |
Author: Mohammad |
Hits:
Description: A bag-of-words model implementation for local features such as SIFT, including K-means clustering, histogram feature construction, and KNN classification.
Platform: |
Size: 26533888 |
Author: 张志智 |
Hits:
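The pipeline in the entry above (K-means codebook, histogram features, KNN classification) can be sketched as follows. Random vectors stand in for SIFT descriptors here; a real pipeline would use an actual SIFT extractor:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in for SIFT: each "image" is a set of 128-d local descriptors
# (hypothetical random data for illustration only).
def fake_descriptors(center, n=50):
    return rng.normal(center, 0.5, size=(n, 128))

train_images = [fake_descriptors(0.0) for _ in range(5)] + \
               [fake_descriptors(2.0) for _ in range(5)]
train_labels = [0] * 5 + [1] * 5

# 1. Build the visual dictionary by K-means over all training descriptors.
k = 8
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_images))

# 2. Represent each image as a normalized histogram of visual-word counts.
def bow_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X_train = np.array([bow_histogram(d) for d in train_images])

# 3. Classify new images with KNN on the BoW histograms.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, train_labels)
print(knn.predict([bow_histogram(fake_descriptors(2.0))]))
```

Since the two descriptor populations are well separated, a test image drawn from the second population lands on the class-1 side of the histogram space.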
Description: LDA is a document topic generation model, also known as a three-layer Bayesian probabilistic model, with a three-layer structure of words, topics, and documents. The document-to-topic distribution follows a Dirichlet distribution; the topic-to-word distribution follows a multinomial distribution.
LDA is an unsupervised machine-learning technique that can identify latent topic information in a large document collection or corpus. It uses the bag-of-words approach, which treats each document as a word-frequency vector, converting text into numeric information that is easy to model. The bag-of-words approach ignores the order of words, which simplifies the problem but also leaves room for improving the model. Each document represents a probability distribution over some topics, and each topic in turn represents a probability distribution over many words.
For each document in the corpus, LDA defines the following generative process:
1. For each document, draw a topic from the document's topic distribution;
2. Draw a word from the word distribution corresponding to the drawn topic;
3. Repeat until every word in the document has been generated.
Platform: |
Size: 30720 |
Author: yangling |
Hits:
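The three-step generative process above can be illustrated with a small simulation. The vocabulary, topic-word distributions, and Dirichlet prior below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary and topics (hypothetical): each topic is a
# multinomial distribution over words, as in the LDA generative story.
vocab = ["ball", "goal", "team", "vote", "law", "party"]
topics = np.array([
    [0.40, 0.30, 0.25, 0.02, 0.02, 0.01],   # "sports" topic
    [0.01, 0.02, 0.02, 0.35, 0.30, 0.30],   # "politics" topic
])
alpha = np.array([0.5, 0.5])  # Dirichlet prior over topic proportions

def generate_document(n_words):
    # Document-level topic proportions theta ~ Dirichlet(alpha)
    theta = rng.dirichlet(alpha)
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)      # 1. draw a topic
        w = rng.choice(len(vocab), p=topics[z])   # 2. draw a word from it
        words.append(vocab[w])                    # 3. repeat per word
    return words

doc = generate_document(10)
print(doc)
```

Because theta is drawn once per document, a generated document tends to be dominated by one topic's vocabulary, which is exactly the "each document is a distribution over topics" structure described above.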
Description: Sample and cluster each image's feature points into visual words (the visual words represent the image); create a database, store each image's visual words, and build an index over them.
Platform: |
Size: 22851584 |
Author: 耿文浩 |
Hits:
Description: Just like the EM algorithm for a Gaussian mixture model, this is the EM algorithm for fitting a Bernoulli mixture model.
GMMs are useful for clustering real-valued data, but for binary data (such as bag-of-words features) a Bernoulli mixture is more suitable.
Platform: |
Size: 3072 |
Author: lin |
Hits:
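A minimal sketch of EM for a Bernoulli mixture in the spirit of the entry above. This is an illustration on toy binary data, not the uploaded code:

```python
import numpy as np

rng = np.random.default_rng(0)

# EM for a mixture of k multivariate Bernoulli components over
# binary vectors (e.g. binarized bag-of-words features).
def bernoulli_mixture_em(X, k, n_iter=50):
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))   # per-component Bernoulli means
    for _ in range(n_iter):
        # E-step: responsibilities from log p(x | mu_j) + log pi_j
        log_p = (X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)   # stabilize exp
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and means (clipping avoids log(0))
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu, r

# Toy binary data from two clearly distinct Bernoulli sources.
X = np.vstack([rng.random((30, 6)) < [0.9, 0.9, 0.9, 0.1, 0.1, 0.1],
               rng.random((30, 6)) < [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]]).astype(float)
pi, mu, r = bernoulli_mixture_em(X, k=2)
print(np.round(pi, 2))
```

The structure mirrors GMM EM exactly; only the component likelihood changes from a Gaussian density to a product of Bernoullis.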