This article collects typical usage examples of the Java class net.paoding.analysis.analyzer.PaodingAnalyzer. If you are wondering how to use PaodingAnalyzer, the curated class examples below may help.
The PaodingAnalyzer class belongs to the net.paoding.analysis.analyzer package. Ten code examples of the class are shown below, sorted by popularity by default.
Example 1: searchIndex
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
/**
 * Search the index.
 *
 * @param typeId   index category id (game or soft)
 * @param keywords query keywords
 * @return matching documents
 * @throws Exception
 */
public List<Document> searchIndex(Integer typeId, String keywords) throws Exception {
// 1.init searcher
Analyzer analyzer = new PaodingAnalyzer();
IndexReader reader = IndexReader.open(typeId == appConfig.getGameTypeId() ? appConfig.getGameIndexDir()
: appConfig.getSoftIndexDir());
BooleanClause.Occur[] flags = new BooleanClause.Occur[] { BooleanClause.Occur.SHOULD,
BooleanClause.Occur.SHOULD };
Query query = MultiFieldQueryParser.parse(keywords, appConfig.getQueryFields(), flags, analyzer);
query = query.rewrite(reader);
// 2.search
List<Document> docs = new ArrayList<Document>();
Hits hits = (typeId == appConfig.getGameTypeId() ? gameSearcher.search(query, Sort.RELEVANCE)
: softSearcher.search(query, Sort.RELEVANCE));
for (int i = 0; i < hits.length(); i++) {
docs.add(hits.doc(i));
}
// 3.return
reader.close();
return docs;
}
Author: zhaoxi1988, Project: sjk, Lines: 31, Source: SearchServiceImpl.java
Example 2: search
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
@RequestMapping("/search")
public ModelAndView search(@RequestParam(value="keyword")String keyword,@RequestParam(value="start")int start,@RequestParam(value="pagesize")int pagesize){
QueryResult<Book> queryResult= null;
try {
keyword=keyword==null?"":keyword.trim();
//keyword=new String(keyword.getBytes("iso-8859-1"),"utf-8");
if(!"".equals(keyword)){
queryResult = bookService.query(keyword, start, pagesize, new PaodingAnalyzer());
}
} catch (Exception e) {
e.printStackTrace();
}
ModelAndView modelAndView = new ModelAndView("list");
modelAndView.addObject("queryResult", queryResult);
return modelAndView;
}
Author: v5developer, Project: maven-framework-project, Lines: 17, Source: BookController.java
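The commented-out line `keyword=new String(keyword.getBytes("iso-8859-1"),"utf-8")` in the controller above is the classic workaround for servlet containers that decode GET parameters as ISO-8859-1 while the browser actually sent UTF-8 bytes. A minimal, self-contained sketch of what that re-decode does (class and method names here are illustrative, not from the original project):

```java
import java.nio.charset.StandardCharsets;

public class RecodeDemo {
    // Recover a string that the container wrongly decoded as ISO-8859-1
    // when the client actually sent UTF-8 bytes.
    static String fixEncoding(String misdecoded) {
        return new String(misdecoded.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String original = "实战";
        // Simulate the container's wrong decode: UTF-8 bytes read as Latin-1.
        String garbled = new String(original.getBytes(StandardCharsets.UTF_8), StandardCharsets.ISO_8859_1);
        System.out.println(fixEncoding(garbled).equals(original)); // true
    }
}
```

Because each byte value maps to exactly one ISO-8859-1 character, the round trip is lossless, which is why this trick works at all.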
Example 3: search
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
@Test
public void search(){
int start=0;
int pagesize=5;
Analyzer analyzer=new PaodingAnalyzer();
String[] field=new String[]{"name","description","authors.name"};
QueryResult<Book> queryResult= null;
try {
queryResult = bookDao.query("实战", start, pagesize, analyzer, field);
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Found ["+queryResult.getSearchresultsize()+"] matching records!");
for (Book book : queryResult.getSearchresult()) {
System.out.println("Title: "+book.getName()+"\nDescription: "+book.getDescription()+"\nPublication date: "+book.getPublicationDate());
System.out.println("----------------------------------------------------------");
}
}
Author: v5developer, Project: maven-framework-project, Lines: 21, Source: BookDaoImplTest.java
Example 4: search
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
@RequestMapping("/search")
public ModelAndView search(@RequestParam(value="keyword")String keyword,@RequestParam(value="start")int start,@RequestParam(value="pagesize")int pagesize){
QueryResult<Book> queryResult= null;
try {
keyword=keyword==null?"":keyword.trim();
//keyword=new String(keyword.getBytes("iso-8859-1"),"utf-8");
if(!"".equals(keyword)){
queryResult = bookService.query(keyword, start, pagesize, new PaodingAnalyzer());
}
} catch (Exception e) {
e.printStackTrace();
}
ModelAndView modelAndView = new ModelAndView("list");
modelAndView.addObject("queryResult", queryResult);
modelAndView.addObject("keyword", keyword);
return modelAndView;
}
Author: v5developer, Project: maven-framework-project, Lines: 18, Source: BookController.java
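Both controllers above take `start` and `pagesize` straight from the request. If the front end works with 1-based page numbers instead, the zero-based offset has to be derived; a small hypothetical helper (not part of the original project) for that conversion:

```java
public class Paging {
    // Derive the zero-based start offset expected by bookService.query
    // from a one-based page number. Hypothetical helper, not in the
    // original code.
    static int startOf(int page, int pagesize) {
        if (page < 1 || pagesize < 1) {
            throw new IllegalArgumentException("page and pagesize must be >= 1");
        }
        return (page - 1) * pagesize;
    }

    public static void main(String[] args) {
        System.out.println(startOf(1, 5)); // first page starts at offset 0
        System.out.println(startOf(3, 5)); // third page starts at offset 10
    }
}
```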
Example 5: initDir
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static IndexWriter initDir (Directory dir, boolean create) {
Analyzer analyzer = new PaodingAnalyzer(); // create analyzer
IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_35, analyzer);
if (create) {
// Create a new index in the directory, removing any previously
// indexed documents:
iwc.setOpenMode(OpenMode.CREATE);
} else {
// Add new documents to an existing index:
iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
}
// TODO: iwc optimization
IndexWriter writer = null;
try {
writer = new IndexWriter(dir, iwc);
writer.commit();
} catch (Exception e) {
logger.error("initial dir error. " + e.getMessage());
}
return writer;
}
Author: lulyon, Project: RealTimeIndexer, Lines: 24, Source: RealTimeIndex.java
Example 6: main
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static void main(String[] args) throws Exception{
ApplicationContext applicationContext=new ClassPathXmlApplicationContext("applicationContext.xml");
SessionFactory sessionFactory = applicationContext.getBean("hibernate4sessionFactory",SessionFactory.class);
FullTextSession fullTextSession = Search.getFullTextSession(sessionFactory.openSession());
// Query via the Hibernate Search API, matching multiple fields: name, description, authors.name
// QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Book.class ).get();
// Query luceneQuery = qb.keyword().onFields("name","description","authors.name").matching("移动互联网").createQuery();
// Query via the Lucene API instead, matching the same fields: name, description, authors.name
// Use the Paoding analyzer
MultiFieldQueryParser queryParser=new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name","description","authors.name"}, new PaodingAnalyzer());
Query luceneQuery=queryParser.parse("实战");
FullTextQuery fullTextQuery =fullTextSession.createFullTextQuery(luceneQuery, Book.class);
// Page size: how many results per page
fullTextQuery.setMaxResults(5);
// Offset of the first result (current page)
fullTextQuery.setFirstResult(0);
// Highlighting setup
SimpleHTMLFormatter formatter=new SimpleHTMLFormatter("<b><font color='red'>", "</font></b>");
QueryScorer queryScorer=new QueryScorer(luceneQuery);
Highlighter highlighter=new Highlighter(formatter, queryScorer);
@SuppressWarnings("unchecked")
List<Book> resultList = fullTextQuery.list();
System.out.println("Found ["+resultList.size()+"] records");
for (Book book : resultList) {
String highlighterString=null;
Analyzer analyzer=new PaodingAnalyzer();
try {
// Highlight name
highlighterString=highlighter.getBestFragment(analyzer, "name", book.getName());
if(highlighterString!=null){
book.setName(highlighterString);
}
// Highlight authors.name
Set<Author> authors = book.getAuthors();
for (Author author : authors) {
highlighterString=highlighter.getBestFragment(analyzer, "authors.name", author.getName());
if(highlighterString!=null){
author.setName(highlighterString);
}
}
// Highlight description
highlighterString=highlighter.getBestFragment(analyzer, "description", book.getDescription());
if(highlighterString!=null){
book.setDescription(highlighterString);
}
} catch (Exception e) {
// Ignore highlighting failures and fall back to the raw field values
}
System.out.println("Title: "+book.getName()+"\nDescription: "+book.getDescription()+"\nPublication date: "+book.getPublicationDate());
System.out.println("----------------------------------------------------------");
}
fullTextSession.close();
sessionFactory.close();
}
Author: v5developer, Project: maven-framework-project, Lines: 62, Source: SearchManager.java
Example 7: main
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static void main(String[] args) throws Exception{
ApplicationContext applicationContext=new ClassPathXmlApplicationContext("applicationContext.xml");
EntityManagerFactory entityManagerFactory = applicationContext.getBean("entityManagerFactory",EntityManagerFactory.class);
FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManagerFactory.createEntityManager());
// Query via the Hibernate Search API, matching multiple fields: name, description, authors.name
// QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Book.class ).get();
// Query luceneQuery = qb.keyword().onFields("name","description","authors.name").matching("移动互联网").createQuery();
// Query via the Lucene API instead, matching the same fields: name, description, authors.name
// Use the Paoding analyzer
MultiFieldQueryParser queryParser=new MultiFieldQueryParser(Version.LUCENE_36, new String[]{"name","description","authors.name"}, new PaodingAnalyzer());
Query luceneQuery=queryParser.parse("实战");
FullTextQuery fullTextQuery =fullTextEntityManager.createFullTextQuery(luceneQuery, Book.class);
// Page size: how many results per page
fullTextQuery.setMaxResults(5);
// Offset of the first result (current page)
fullTextQuery.setFirstResult(0);
// Highlighting setup
SimpleHTMLFormatter formatter=new SimpleHTMLFormatter("<b><font color='red'>", "</font></b>");
QueryScorer queryScorer=new QueryScorer(luceneQuery);
Highlighter highlighter=new Highlighter(formatter, queryScorer);
@SuppressWarnings("unchecked")
List<Book> resultList = fullTextQuery.getResultList();
for (Book book : resultList) {
String highlighterString=null;
Analyzer analyzer=new PaodingAnalyzer();
try {
// Highlight name
highlighterString=highlighter.getBestFragment(analyzer, "name", book.getName());
if(highlighterString!=null){
book.setName(highlighterString);
}
// Highlight authors.name
Set<Author> authors = book.getAuthors();
for (Author author : authors) {
highlighterString=highlighter.getBestFragment(analyzer, "authors.name", author.getName());
if(highlighterString!=null){
author.setName(highlighterString);
}
}
// Highlight description
highlighterString=highlighter.getBestFragment(analyzer, "description", book.getDescription());
if(highlighterString!=null){
book.setDescription(highlighterString);
}
} catch (Exception e) {
// Ignore highlighting failures and fall back to the raw field values
}
}
fullTextEntityManager.close();
entityManagerFactory.close();
}
Author: v5developer, Project: maven-framework-project, Lines: 60, Source: SearchManager.java
Example 8: main
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static void main(String[] args) throws Exception {
if (args.length != 0) {
QUERY = args[0];
}
// Wrap Paoding as an Analyzer conforming to Lucene's contract
Analyzer analyzer = new PaodingAnalyzer();
// Read the text.txt file in this class's directory
String content = ContentReader.readText(English.class);
// What follows is standard Lucene indexing and search code
Directory ramDir = new RAMDirectory();
IndexWriter writer = new IndexWriter(ramDir, analyzer);
Document doc = new Document();
Field fd = new Field(FIELD_NAME, content, Field.Store.YES,
Field.Index.TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
doc.add(fd);
writer.addDocument(doc);
writer.optimize();
writer.close();
IndexReader reader = IndexReader.open(ramDir);
String queryString = QUERY;
QueryParser parser = new QueryParser(FIELD_NAME, analyzer);
Query query = parser.parse(queryString);
Searcher searcher = new IndexSearcher(ramDir);
query = query.rewrite(reader);
System.out.println("Searching for: " + query.toString(FIELD_NAME));
Hits hits = searcher.search(query);
BoldFormatter formatter = new BoldFormatter();
Highlighter highlighter = new Highlighter(formatter, new QueryScorer(
query));
highlighter.setTextFragmenter(new SimpleFragmenter(50));
for (int i = 0; i < hits.length(); i++) {
String text = hits.doc(i).get(FIELD_NAME);
int maxNumFragmentsRequired = 5;
String fragmentSeparator = "...";
TermPositionVector tpv = (TermPositionVector) reader
.getTermFreqVector(hits.id(i), FIELD_NAME);
TokenStream tokenStream = TokenSources.getTokenStream(tpv);
String result = highlighter.getBestFragments(tokenStream, text,
maxNumFragmentsRequired, fragmentSeparator);
System.out.println("\n" + result);
}
reader.close();
}
Author: no8899, Project: paoding-for-lucene-2.4, Lines: 48, Source: English.java
Example 9: main
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static void main(String[] args) throws Exception {
if (args.length != 0) {
QUERY = args[0];
}
// Wrap Paoding as an Analyzer conforming to Lucene's contract
Analyzer analyzer = new PaodingAnalyzer();
// Read the text.txt file in this class's directory
String content = ContentReader.readText(Chinese.class);
// What follows is standard Lucene indexing and search code
Directory ramDir = new RAMDirectory();
IndexWriter writer = new IndexWriter(ramDir, analyzer);
Document doc = new Document();
Field fd = new Field(FIELD_NAME, content, Field.Store.YES,
Field.Index.TOKENIZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
doc.add(fd);
writer.addDocument(doc);
writer.optimize();
writer.close();
IndexReader reader = IndexReader.open(ramDir);
String queryString = QUERY;
QueryParser parser = new QueryParser(FIELD_NAME, analyzer);
Query query = parser.parse(queryString);
Searcher searcher = new IndexSearcher(ramDir);
query = query.rewrite(reader);
System.out.println("Searching for: " + query.toString(FIELD_NAME));
Hits hits = searcher.search(query);
BoldFormatter formatter = new BoldFormatter();
Highlighter highlighter = new Highlighter(formatter, new QueryScorer(
query));
highlighter.setTextFragmenter(new SimpleFragmenter(50));
for (int i = 0; i < hits.length(); i++) {
String text = hits.doc(i).get(FIELD_NAME);
int maxNumFragmentsRequired = 5;
String fragmentSeparator = "...";
TermPositionVector tpv = (TermPositionVector) reader
.getTermFreqVector(hits.id(i), FIELD_NAME);
TokenStream tokenStream = TokenSources.getTokenStream(tpv);
String result = highlighter.getBestFragments(tokenStream, text,
maxNumFragmentsRequired, fragmentSeparator);
System.out.println("\n" + result);
}
reader.close();
}
Author: no8899, Project: paoding-for-lucene-2.4, Lines: 48, Source: Chinese.java
Example 10: testIndex
import net.paoding.analysis.analyzer.PaodingAnalyzer; // import the required package/class
public static void testIndex() throws Exception{
// Directory of files to be indexed
File fileDir = new File("D:\\luceneweb\\docs");
// Directory that will store the index of 'fileDir'
File indexDir = new File("D:\\luceneweb\\index");
// Create the IndexWriter
Analyzer paodingAnalyzer = new PaodingAnalyzer();
Directory FSDir = FSDirectory.open(indexDir);
IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_35, paodingAnalyzer); // new StandardAnalyzer(Version.LUCENE_35)
IndexWriter indexWriter = new IndexWriter(FSDir, conf);
File[] textFiles = fileDir.listFiles();
long startTime = new Date().getTime();
try {
System.out.println("Indexing to directory '" + indexDir.getName() + "'...");
// Add document to index
for(int i = 0; i < textFiles.length; i++){
if(textFiles[i].isFile() && textFiles[i].getName().endsWith(".txt")){
System.out.println("File " + textFiles[i].getCanonicalPath().substring(18) + " is being indexed...");
String tmp = FileReaderAll(textFiles[i].getCanonicalPath(), "GBK");
Document document = new Document();
System.out.println("--" + (i + 1) + "-- " + textFiles[i].getPath().substring(18));
Field FieldPath = new Field("path", textFiles[i].getPath(),
Field.Store.YES, Field.Index.NO);
// System.out.println("--" + (i + 1) + "-- " + tmp);
Field FieldBody = new Field("body", tmp, Field.Store.YES,
Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
// Construct document object
document.add(FieldPath);
document.add(FieldBody);
// Add into the index dir
indexWriter.addDocument(document);
} // if
} // for
// Optimize of the index
indexWriter.optimize();
indexWriter.close();
// Test the time
long endTime = new Date().getTime();
System.out.println("It took " + (endTime - startTime)
+ " milliseconds to add the documents to the index! " + fileDir.getPath());
}catch(Exception e) {
e.printStackTrace();
}
}
Author: YinYanfei, Project: CadalWorkspace, Lines: 57, Source: MainTest.java
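Example 10 calls a `FileReaderAll(path, "GBK")` helper whose body is not shown. A plausible stdlib-only implementation — an assumption about what the original helper does, namely read a whole file under a named charset — might look like:

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileReaderAllSketch {
    // Assumed reconstruction of the FileReaderAll helper used in
    // Example 10: read an entire file using the named charset (e.g. "GBK").
    static String fileReaderAll(String path, String charsetName) throws IOException {
        byte[] bytes = Files.readAllBytes(Path.of(path));
        return new String(bytes, Charset.forName(charsetName));
    }

    public static void main(String[] args) throws IOException {
        // Round-trip check: write GBK-encoded text, then read it back.
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "你好".getBytes(Charset.forName("GBK")));
        System.out.println(fileReaderAll(tmp.toString(), "GBK"));
        Files.delete(tmp);
    }
}
```

Reading with the wrong charset here is exactly what produced the garbled println strings in the original snippet, so pinning the charset explicitly matters.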
Note: the net.paoding.analysis.analyzer.PaodingAnalyzer examples in this article are collected from open-source projects and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Consult each project's license before use or redistribution, and do not reproduce without permission.