
ElasticSearch: Java API | Javalobby

Official ElasticSearch Java API documentation:

http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/index.html

ElasticSearch provides a Java API, so all operations can be executed asynchronously through a Client object; operations can also be accumulated and executed in bulk. Internally, ElasticSearch itself uses this Java API to execute all of its APIs.

In this tutorial, we will look at how to perform some of the operations from the previous article in a standalone Java application using the Java API.
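The "asynchronous" part of the API means that execute() returns immediately with a future, and actionGet() blocks until the response is available, the same calling convention as JDK futures. A stdlib-only sketch of that pattern (no ElasticSearch classes involved, names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // like execute(): returns immediately with a future for the response
        Future<String> future = pool.submit(new Callable<String>() {
            public String call() { return "response"; }
        });
        // like actionGet(): block until the response arrives
        String response = future.get();
        System.out.println(response);
        pool.shutdown();
    }
}
```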

Dependency

ElasticSearch is hosted on Maven Central. In a Maven project, you declare which ElasticSearch version you want to use in your pom.xml, as shown below:

 

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>0.90.3</version>
</dependency>

 

 

Client

Using a Client, you can perform administrative tasks on a running cluster as well as standard index, get, delete, and search operations against an existing cluster. A Client can also be used to start embedded nodes.

The most common way to obtain an ElasticSearch client is to create an embedded node that joins the cluster, and request a client from that embedded node.

The other way is to create a TransportClient, which connects to the cluster remotely via the transport module.

When using the Java API, consider running the same version on the client and the cluster; version differences between them can cause incompatibilities.
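As an illustration only (ElasticSearch's real compatibility rules are more involved than this), a naive guard comparing the major.minor parts of two version strings might look like:

```java
public class VersionCheck {
    // naive major.minor comparison; illustrative only, not the
    // actual ElasticSearch compatibility check
    static boolean sameMajorMinor(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        return pa[0].equals(pb[0]) && pa[1].equals(pb[1]);
    }

    public static void main(String[] args) {
        System.out.println(sameMajorMinor("0.90.3", "0.90.5")); // true
        System.out.println(sameMajorMinor("0.90.3", "1.0.0"));  // false
    }
}
```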

 

Node Client

The simplest way of getting a client instance is the node based client. 

 

Node node  = nodeBuilder().node();
Client client = node.client();

 

When a node is started, it joins the "elasticsearch" cluster by default. You can have it join a different cluster either through the cluster.name setting, placed in an elasticsearch.yml file under your project's /src/main/resources directory, or through the clusterName method in code:

 

cluster.name: yourclustername

 

Or in Java code:

 

Node node = nodeBuilder().clusterName("yourclustername").node();
Client client = node.client();

 

Creating Index

 

The Index API allows you to store a typed JSON document in a specific index and make it searchable. There are different ways of generating JSON documents; here we use a Map, which represents a JSON structure very well.

 

public static Map<String, Object> putJsonDocument(String title, String content, Date postDate, 
                                                      String[] tags, String author){
        
        Map<String, Object> jsonDocument = new HashMap<String, Object>();
        
        jsonDocument.put("title", title);
        jsonDocument.put("content", content);
        jsonDocument.put("postDate", postDate);
        jsonDocument.put("tags", tags);
        jsonDocument.put("author", author);
        
        return jsonDocument;
    }

 

Node node = nodeBuilder().node();
Client client = node.client();

client.prepareIndex("kodcucom", "article", "1")
      .setSource(putJsonDocument("ElasticSearch: Java API",
                                 "ElasticSearch provides the Java API, all operations "
                                 + "can be executed asynchronously using a client object.",
                                 new Date(),
                                 new String[]{"elasticsearch"},
                                 "Hüseyin Akdoğan")).execute().actionGet();

node.close();

 

With the above code, we create an index named kodcucom and a type named article with default settings, and store a record whose ID is 1 in ElasticSearch (we don't have to supply an ID here).
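As an aside, the document body does not have to be a Map: setSource also accepts a raw JSON string. A stdlib-only sketch of the same article document assembled as a string (field names mirror putJsonDocument above; values are illustrative):

```java
public class RawJsonSketch {
    public static void main(String[] args) {
        // The article document as a raw JSON string instead of a Map
        String json = "{"
                + "\"title\":\"ElasticSearch: Java API\","
                + "\"tags\":[\"elasticsearch\"],"
                + "\"author\":\"Hüseyin Akdoğan\""
                + "}";
        System.out.println(json);
        // client.prepareIndex("kodcucom", "article", "1").setSource(json)
        // would accept this string in place of the Map
    }
}
```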

 

Getting Document

The Get API allows you to get a typed JSON document, based on the ID, from the index. 

 

GetResponse getResponse = client.prepareGet("kodcucom", "article", "1").execute().actionGet();

Map<String, Object> source = getResponse.getSource();
        
System.out.println("------------------------------");
System.out.println("Index: " + getResponse.getIndex());
System.out.println("Type: " + getResponse.getType());
System.out.println("Id: " + getResponse.getId());
System.out.println("Version: " + getResponse.getVersion());
System.out.println(source);
System.out.println("------------------------------");

 

Search

The Search API allows you to execute a search query and get back the matching results. A query can be executed across more than one index and type, and can be expressed with either the query Java API or the filter Java API. Below is an example whose search request body is built with SearchSourceBuilder.

 

public static void searchDocument(Client client, String index, String type,
                                      String field, String value){
        
        SearchResponse response = client.prepareSearch(index)
                                        .setTypes(type)
                                        .setSearchType(SearchType.QUERY_AND_FETCH)
                                        .setQuery(fieldQuery(field, value))
                                        .setFrom(0).setSize(60).setExplain(true)
                                        .execute()
                                        .actionGet();
        
        SearchHit[] results = response.getHits().getHits();
        
        System.out.println("Current results: " + results.length);
        for (SearchHit hit : results) {
            System.out.println("------------------------------");
            Map<String,Object> result = hit.getSource();   
            System.out.println(result);
        }
        
    }

 

searchDocument(client, "kodcucom", "article", "title", "ElasticSearch");

 

Updating

Below you can see an example of a field update.

 

public static void updateDocument(Client client, String index, String type, 
                                      String id, String field, String newValue){
        
        Map<String, Object> updateObject = new HashMap<String, Object>();
        updateObject.put(field, newValue);
        
        // MVEL script, e.g. "ctx._source.tags=tags": the new value is bound
        // to a script parameter with the same name as the field
        client.prepareUpdate(index, type, id)
              .setScript("ctx._source." + field + "=" + field)
              .setScriptParams(updateObject).execute().actionGet();
    }

 

updateDocument(client, "kodcucom", "article", "1", "tags", "big-data");

 

Deleting

The Delete API allows you to delete a document by its ID. Below is an example of deleting a document whose index, type, and ID are specified.

 

public static void deleteDocument(Client client, String index, String type, String id){
        
        DeleteResponse response = client.prepareDelete(index, type, id).execute().actionGet();
        System.out.println("Information on the deleted document:");
        System.out.println("Index: " + response.getIndex());
        System.out.println("Type: " + response.getType());
        System.out.println("Id: " + response.getId());
        System.out.println("Version: " + response.getVersion());
    }

 

deleteDocument(client, "kodcucom", "article", "1");


 

Adding mapping to a type from Java (defining each indexed field's name and data type, and whether it is stored and analyzed)

 

 
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
 
import java.io.IOException;
 
import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequestBuilder;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse;
import org.elasticsearch.action.get.GetRequestBuilder;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequestBuilder;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;
 
public class MyTestClass {
 
    private static final String ID_NOT_FOUND = "<ID NOT FOUND>";
 
    private static Client getClient() {
        final ImmutableSettings.Builder settings = ImmutableSettings.settingsBuilder();
        TransportClient transportClient = new TransportClient(settings);
        transportClient = transportClient.addTransportAddress(new InetSocketTransportAddress("localhost", 9300))
        .addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
        return transportClient;
    }
 
    public static void main(final String[] args) throws IOException, InterruptedException {
 
        final Client client = getClient();
        // Create Index and set settings and mappings
        final String indexName = "test";
        final String documentType = "tweet";
        final String documentId = "1";
        final String fieldName = "title";
        final String value = "bar";
 
        final IndicesExistsResponse res = client.admin().indices().prepareExists(indexName).execute().actionGet();
        if (res.isExists()) {
            final DeleteIndexRequestBuilder delIdx = client.admin().indices().prepareDelete(indexName);
            delIdx.execute().actionGet();
        }
 
        final CreateIndexRequestBuilder createIndexRequestBuilder = client.admin().indices().prepareCreate(indexName);
 
        // MAPPING GOES HERE

        // Note: the expiry loop at the end of main() only terminates if _ttl is
        // enabled for the type, as in this commented-out mapping:
//        final XContentBuilder mappingBuilder = jsonBuilder().startObject().startObject(documentType)
//                .startObject("_ttl").field("enabled", "true").field("default", "1s").endObject().endObject()
//                .endObject();
        XContentBuilder mapping = jsonBuilder()  
              .startObject()  
                .startObject(documentType)  
                .startObject("properties")         
                  .startObject("title").field("type", "string").field("store", "yes").endObject()    
                  .startObject("description").field("type", "string").field("index", "not_analyzed").endObject()  
                  .startObject("price").field("type", "double").endObject()  
                  .startObject("onSale").field("type", "boolean").endObject()  
                  .startObject("type").field("type", "integer").endObject()  
                  .startObject("createDate").field("type", "date").endObject()                 
                .endObject()  
               .endObject()  
             .endObject();          
        System.out.println(mapping.string());
        createIndexRequestBuilder.addMapping(documentType, mapping);
 
        // MAPPING DONE
        createIndexRequestBuilder.execute().actionGet();
 
        // Add documents
        final IndexRequestBuilder indexRequestBuilder = client.prepareIndex(indexName, documentType, documentId);
        // build json object
        final XContentBuilder contentBuilder = jsonBuilder().startObject().prettyPrint();
        contentBuilder.field(fieldName, value);
 
        indexRequestBuilder.setSource(contentBuilder);
        indexRequestBuilder.execute().actionGet();
 
        // Get document
        System.out.println(getValue(client, indexName, documentType, documentId, fieldName));
 
        int idx = 0;
        while (true) {
            Thread.sleep(10000L);
            idx++;
            System.out.println(idx * 10 + " seconds passed");
            final String name = getValue(client, indexName, documentType, documentId, fieldName);
            if (ID_NOT_FOUND.equals(name)) {
                break;
            } else {
                // document still present; print it and poll again
                System.out.println(name);
            }
            }
        }
        System.out.println("Document was garbage collected");
    }
 
    protected static String getValue(final Client client, final String indexName, final String documentType,
            final String documentId, final String fieldName) {
        final GetRequestBuilder getRequestBuilder = client.prepareGet(indexName, documentType, documentId);
        getRequestBuilder.setFields(new String[] { fieldName });
        final GetResponse response2 = getRequestBuilder.execute().actionGet();
        if (response2.isExists()) {
            final String name = response2.getField(fieldName).getValue().toString();
            return name;
        } else {
            return ID_NOT_FOUND;
        }
    }
 
}

 



Near-real-time (NRT) search in Lucene 4.4 and later

After Lucene 4.4, NRTManager and NRTManagerReopenThread are gone; near-real-time search is now done as follows.

Initialization:

   Directory directory = new RAMDirectory();
   IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_48, new StandardAnalyzer(Version.LUCENE_48));
   IndexWriter indexWriter = new IndexWriter(directory, iwc);
   TrackingIndexWriter trackWriter = new TrackingIndexWriter(indexWriter);
   SearcherManager searcherManager = new SearcherManager(indexWriter, true, new SearcherFactory());
   
   ControlledRealTimeReopenThread<IndexSearcher> CRTReopenThread = 
     new ControlledRealTimeReopenThread<IndexSearcher>(trackWriter, searcherManager, 5.0, 0.025) ;

   CRTReopenThread.setDaemon(true);
   CRTReopenThread.setName("NRT-Reopen-Thread");
   CRTReopenThread.start();

Adding a document:

trackWriter.addDocument(doc);

Searching:

IndexSearcher searcher = searcherManager.acquire();

......

searcherManager.release(searcher);
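The acquire/release pairing matters because SearcherManager reference-counts each searcher: a refresh can swap in a new searcher, but the old one stays usable until every thread that acquired it has released it. A deliberately simplified, Lucene-free toy model of that behavior (an assumption-laden sketch, not the real implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of SearcherManager's reference counting: a searcher remains
// usable until every thread that acquired it has released it.
class ToyManager {
    static class Searcher {
        final AtomicInteger refs = new AtomicInteger(1); // manager's own reference
        volatile boolean closed = false;
    }

    private volatile Searcher current = new Searcher();

    Searcher acquire() {
        current.refs.incrementAndGet();
        return current;
    }

    void release(Searcher s) {
        if (s.refs.decrementAndGet() == 0) s.closed = true; // last ref closes it
    }

    void maybeRefresh() {           // swap in a new searcher; the old one closes
        Searcher old = current;     // once all outstanding acquires are released
        current = new Searcher();
        release(old);               // drop the manager's own reference
    }
}

public class ToyManagerDemo {
    public static void main(String[] args) {
        ToyManager mgr = new ToyManager();
        ToyManager.Searcher s = mgr.acquire();
        mgr.maybeRefresh();               // refresh while s is still in use
        System.out.println(s.closed);     // false: our reference keeps it open
        mgr.release(s);
        System.out.println(s.closed);     // true: last reference is gone
    }
}
```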

 
 
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
 
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TrackingIndexWriter;
import org.apache.lucene.search.ControlledRealTimeReopenThread;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ReferenceManager;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.Version;
 
 
public class LuceneIndex {
    private final static Logger log = LogManager.getLogger(LuceneIndex.class);
    private final IndexWriter _indexWriter;
    private final TrackingIndexWriter _trackingIndexWriter;
    private final ReferenceManager<IndexSearcher> _indexSearcherReferenceManager;
    private final ControlledRealTimeReopenThread<IndexSearcher> _indexSearcherReopenThread;
 
 
    private long _reopenToken;      // index update/delete methods returned token
 
////////// CONSTRUCTOR & FINALIZE
    /**
     * Constructor based on an instance of the type responsible of the lucene index persistence
     */
    
    public LuceneIndex(final Directory luceneDirectory,
                       final Analyzer analyzer) {
        try {
            // [1]: Create the indexWriter
            _indexWriter = new IndexWriter(luceneDirectory,
                                           new IndexWriterConfig(Version.LATEST,
                                                                 analyzer));
 
            // [2a]: Create the TrackingIndexWriter to track changes to the delegated previously created IndexWriter 
            _trackingIndexWriter = new TrackingIndexWriter(_indexWriter);
 
            // [2b]: Create an IndexSearcher ReferenceManager to safely share IndexSearcher instances across
            //       multiple threads
            _indexSearcherReferenceManager = new SearcherManager(_indexWriter,
                                                                 true,
                                                                 null);
 
            // [3]: Create the ControlledRealTimeReopenThread that reopens the index periodically,
            //      taking into account the changes made to the index and tracked by the
            //      TrackingIndexWriter instance.
            //      The index is refreshed every 60s when nobody is waiting
            //      and every 100 millis when someone is waiting (see search method)
            //      (see http://lucene.apache.org/core/4_3_0/core/org/apache/lucene/search/NRTManagerReopenThread.html)
            _indexSearcherReopenThread = new ControlledRealTimeReopenThread<IndexSearcher>(_trackingIndexWriter,
                                                                                           _indexSearcherReferenceManager,
                                                                                           60.00,   // when there is nobody waiting
                                                                                           0.1);    // when there is someone waiting
            _indexSearcherReopenThread.start(); // start the refresher thread
        } catch (IOException ioEx) {
            throw new IllegalStateException("Lucene index could not be created: " + ioEx.getMessage());
        }
    }
    @Override
    protected void finalize() throws Throwable {
        this.close();
        super.finalize();
    }
    /**
     * Closes every index
     */
    public void close() {
        try {
            // stop the index reader re-open thread
            _indexSearcherReopenThread.interrupt();
            _indexSearcherReopenThread.close();
 
            // Close the indexWriter, committing everything that's pending
            _indexWriter.commit();
            _indexWriter.close();
 
        } catch(IOException ioEx) {
            log.error("Error while closing lucene index: {}",
                                                             ioEx);
        }
    }
////////// INDEX
    /**
     * Index a Lucene document
     * @param doc the document to be indexed
     */
    public void index(final Document doc) { 
        try {
            _reopenToken = _trackingIndexWriter.addDocument(doc);
            log.debug("document indexed in lucene");
        } catch(IOException ioEx) {
            log.error("Error while in Lucene index operation: {}",
                                                                  ioEx);
        } finally {
            try {
                _indexWriter.commit();
            } catch (IOException ioEx) {
                log.error("Error while commiting changes to Lucene index: {}",
                                                                              ioEx);
            }
        }
    }
    /**
     * Updates the index info for a lucene document
     * @param doc the document to be indexed
     */
    public void reIndex(final Term recordIdTerm,
                        final Document doc) {   
        try {
            _reopenToken = _trackingIndexWriter.updateDocument(recordIdTerm, 
                                                               doc);
            log.debug("{} document re-indexed in lucene");
        } catch(IOException ioEx) {
            log.error("Error in lucene re-indexing operation: {}",
                                                                  ioEx);
        } finally {
            try {
                _indexWriter.commit();
            } catch (IOException ioEx) {
                log.error("Error while commiting changes to Lucene index: {}",
                                                                              ioEx);
            }
        }
    }
    /**
     * Unindex a lucene document
     * @param idTerm term used to locate the document to be unindexed
     *               IMPORTANT! the term must filter only the document and only the document
     *                          otherwise all matching docs will be unindexed
     */
    public void unIndex(final Term idTerm) {
        try {
            _reopenToken = _trackingIndexWriter.deleteDocuments(idTerm);
            log.debug("{}={} term matching records un-indexed from lucene");
        } catch(IOException ioEx) {
            log.error("Error in un-index lucene operation: {}",
                                                               ioEx);           
        } finally {
            try {
                _indexWriter.commit(); 
            } catch (IOException ioEx) {
                log.error("Error while commiting changes to Lucene index: {}",
                                                                              ioEx);
            }
        }
    }
    /**
     * Delete all lucene index docs
     */
    public void truncate() {
        try {
            _reopenToken = _trackingIndexWriter.deleteAll();
            log.warn("lucene index truncated");
        } catch(IOException ioEx) {
            log.error("Error truncating lucene index: {}",
                                                          ioEx);            
        } finally {
            try {
                _indexWriter.commit(); 
            } catch (IOException ioEx) {
                log.error("Error truncating lucene index: {}",
                                                              ioEx);
            }
        }
    }
/////// COUNT-SEARCH
    /**
     * Count the number of results returned by a search against the lucene index
     * @param qry the query
     * @return
     */
    public long count(final Query qry) {
        long outCount = 0;
        try {
            _indexSearcherReopenThread.waitForGeneration(_reopenToken);     // wait until the index is re-opened
            IndexSearcher searcher = _indexSearcherReferenceManager.acquire();
            try {
                TopDocs docs = searcher.search(qry,0);
                if (docs != null) outCount = docs.totalHits;
                log.debug("count-search executed against lucene index returning {}");
            } finally {
                _indexSearcherReferenceManager.release(searcher);
            }
        } catch (IOException ioEx) {
            log.error("Error re-opening the index {}",
                                                      ioEx);
        } catch (InterruptedException intEx) {
            log.error("The index writer periodically re-open thread has stopped",
                                                                                 intEx);
        }
        return outCount;
    }
    /**
     * Executes a search query
     * @param qry the query to be executed
     * @param sortFields the search query criteria
     * @param firstResultItemOrder the order number of the first element to be returned
     * @param numberOfResults number of results to be returned
     * @return a page of search results
     */
    public LucenePageResults search(final Query qry,Set<SortField> sortFields,
                                    final int firstResultItemOrder,final int numberOfResults) {
        LucenePageResults outDocs = null;
        try {
            _indexSearcherReopenThread.waitForGeneration(_reopenToken); // wait until the index is re-opened for the last update
            IndexSearcher searcher = _indexSearcherReferenceManager.acquire();
            try {
                // sort criteria (built from the sortFields parameter)
                Sort theSort = (sortFields != null && !sortFields.isEmpty())
                                    ? new Sort(sortFields.toArray(new SortField[0]))
                                    : new Sort();
                // number of results to be returned
                int theNumberOfResults = firstResultItemOrder + numberOfResults;
 
                // Exec the search (if no sort criteria were given, relevance order is used)
                TopDocs scoredDocs = searcher.search(qry,
                                                     theNumberOfResults,
                                                     theSort);
                log.debug("query executed against lucene index");
 
 
                ScoreDoc[] hits = scoredDocs.scoreDocs;

                // collect the documents of the requested page, skipping the first
                // firstResultItemOrder hits (the original article omits the
                // construction of the LucenePageResults return value)
                List<Document> items = new ArrayList<Document>();
                for (int i = firstResultItemOrder; i < hits.length; i++) {
                    Document doc = searcher.doc(hits[i].doc);
                    items.add(doc);
                }
   
            } finally {
                _indexSearcherReferenceManager.release(searcher);
            }
        } catch (IOException ioEx) {
            log.error("Error freeing the searcher {}",
                                                      ioEx);
        } catch (InterruptedException intEx) {
            log.error("The index writer periodically re-open thread has stopped",
                                                                                 intEx);
        }
        return outDocs;
    }
/////// INDEX MAINTENANCE
    /**
     * Merges the lucene index segments into one
     * (this should NOT be used, only rarely for index maintenance)
     */
    public void optimize() {
        try {
            _indexWriter.forceMerge(1);
            log.debug("Lucene index merged into one segment");
        } catch (IOException ioEx) {
            log.error("Error optimizing lucene index {}",
                                                         ioEx);
        }
    }
}

Distributed search: building an ElasticSearch cluster with simple search examples - 苏若年 - 博客园

Ways to build a distributed ElasticSearch cluster.

 
1. Create an embedded es node (Node) in your program so that it becomes part of the es cluster, then communicate with the cluster through that node.
 
/** Before running this test, the corresponding index "datum" was already created locally */
    public static void main(String[] args) {
        
        // When you start a node, it automatically joins the es cluster on the same
        // network segment, provided the cluster name (cluster.name) is set identically.
        String clusterName = "elasticsearch_pudp"; // cluster name
        
        /**
         * By default, when a node starts, the es cluster assigns it some index shards.
         * If you want this node to act purely as a client and hold no data,
         * set node.data to false or node.client to true.
         */
        Node node = NodeBuilder.nodeBuilder().clusterName(clusterName).client(true).node(); 
        
        // start the node and join the specified cluster
        node.start();
        
        // use the node's client: prepareGet fetches the record with unique id 150
        // from the index "datum", index type "datum"
        GetResponse response = node.client().prepareGet("datum", "datum", ""+150).execute().actionGet();
        
        // object mapper
        ObjectMapper mapper = new ObjectMapper();
        // convert the source of the result into the model class; Datum is our own model object
        Datum datum = mapper.convertValue(response.getSource(), Datum.class);
        
        // print a property of the retrieved object
        System.out.println("资讯标题:"+datum.getTitle() );
        
        // close the node
        node.close();
    }
Program output:
 
资讯标题:波立维与泰嘉片哪个治疗血栓病效果更好呢
There is another case: you don't want the node to join a cluster at all, but only want to use it for unit tests. Then you start a "local" es node, where "local" means at the JVM level: two different es nodes running in the same JVM form a cluster of their own. This requires setting the node's local parameter to true:
 
Node node = NodeBuilder.nodeBuilder().local(true).node(); 
  
 
2. Use the TransportClient interface to communicate with the es cluster.
 
Binding cluster nodes
 
Through the TransportClient interface we can communicate with the es cluster without starting a node; it needs the ip address and port of one or more machines in the cluster:
 
        Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress("192.168.0.149", 9300))
        .addTransportAddress(new InetSocketTransportAddress("192.168.0.162", 9300));
    
        client.close();    
If we don't change the cluster name, it defaults to elasticsearch, as configured in the elasticsearch.yml file under the ElasticSearch directory elasticsearch\config\, at the following location:
 
################################### Cluster ###################################
 
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elasticsearch
 
 
Customizing the cluster name in code
 
    /** Before running this test, the corresponding index "datum" was already created locally */
    public static void main(String[] args) {
        
        // custom cluster name
        String clusterName = "elasticsearch_pudongping"; 
        
        // change the cluster name programmatically
        Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName).build();
        
        // create the client and bind the machines of the cluster
        TransportClient client = new TransportClient(settings);
        client.addTransportAddress(new InetSocketTransportAddress("192.168.0.149", 9300));
        client.addTransportAddress(new InetSocketTransportAddress("192.168.0.162", 9300));
        
        // search
        GetResponse response = client.prepareGet("datum", "datum", ""+130)
          .execute()
          .actionGet();
        
        ObjectMapper mapper = new ObjectMapper();
        Datum datum = mapper.convertValue(response.getSource(), Datum.class);
        
        System.out.println("资讯标题:"+datum.getTitle() );
        
        // close the client
        client.close();    
    }
Program output:
 
资讯标题:捷诺维主要成份有哪些 疗效怎么样
Making the client sniff the state of the whole cluster
 
You can set client.transport.sniff to true to make the client sniff the state of the whole cluster:
 
        /**
         * Setting client.transport.sniff to true makes the client sniff the state of
         * the whole cluster and add the addresses of the other machines to the client.
         * The advantage is that you usually don't have to configure every node's ip
         * on the client by hand: they are added automatically, and machines newly
         * joining the cluster are discovered as well.
         */
        Settings settings = ImmutableSettings.settingsBuilder()
        .put("client.transport.sniff", true).build();
        TransportClient client = new TransportClient(settings);
 
 
Example application: 
 
Initialize the client with TransportClient and run a simple search:
 
package com.bbf.client;
 
import java.util.ArrayList;
import java.util.List;
 
import org.codehaus.jackson.map.ObjectMapper;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import com.bbf.search.model.Datum;
 
/**
 * description:
 *
 * @author <a href='mailto:[email protected]'> Cn.pudp (En.dennisit)</a> Copy Right since 2013-9-29 
 *
 * com.bbf.client.ESClient.java
 *
 */
 
public class ESClient {
 
    
    /** Before running this test, the corresponding index "datum" was already created locally */
    public static void main(String[] args) {
        
        
        // custom cluster name
        String clusterName = "elasticsearch_pudongping"; 
        
        // change the cluster name programmatically, and set client.transport.sniff to true to sniff the cluster state
        Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName).put("client.transport.sniff", true).build();   
        
        // create the client object
        TransportClient client = new TransportClient(settings);
        
        // bind the client to nodes in the cluster (multiple ips possible)
        //client.addTransportAddress(new InetSocketTransportAddress("192.168.0.149", 9300));
        client.addTransportAddress(new InetSocketTransportAddress("192.168.0.162", 9300));
        
        
        // search: get by id
        GetResponse response = client.prepareGet("datum", "datum", ""+130)
          .execute()
          .actionGet();
        
        // map the query result onto the model class
        ObjectMapper mapper = new ObjectMapper();
        Datum datum= mapper.convertValue(response.getSource(), Datum.class);
        
        System.out.println("资讯编号:" + datum.getId() +"\t资讯标题:"+datum.getTitle()  );
        
        // build a query: the first argument is the keyword to search for, the second is the field of the index type to search in
        QueryBuilder query = QueryBuilders.multiMatchQuery("恩必普", "keyword");  
        // the first "datum" is the index, the second is the index type; from is the start offset and size the number of hits, like MySQL's LIMIT 3,5
        SearchResponse searchResponse = client.prepareSearch("datum").setTypes("datum").setQuery(query).setFrom(3).setSize(5).execute().actionGet(); 
        
 
        // convert the search results into a List
        List<Datum> lists  = getBeans(searchResponse);
        
        System.out.println("查询出来的结果数:" + lists.size());
        for(Datum dtm: lists){
            System.out.println("资讯编号:" + dtm.getId() +"\t资讯标题:"+dtm.getTitle());
        }
        
        // close the client
        client.close();    
 
    }
    
    /**
     * Extract the json string from the retrieved record and convert it into a <code>Datum</code> object
     *
     * @author <a href='mailto:[email protected]'> Cn.pudp (En.dennisit)</a> Copy Right since 2013-9-24 21:24:29
     *
     * @param response
     *                     the <code>GetResponse</code> result set
     * @return
     *                     the <code>Datum</code> object
     */
    public static Datum getResponseToObject(GetResponse response){
        ObjectMapper mapper = new ObjectMapper();
        return mapper.convertValue(response.getSource(), Datum.class);
    }
    
    
    /**
     * Wrap the retrieved objects into a List
     *
     * @author <a href='mailto:[email protected]'>Cn.pudp(En.dennisit)</a> Copy Right since 2013-9-27 14:31:26
     *
     * @param  response
     * @return
     */
    public static List<Datum> getBeans(SearchResponse response) {
        SearchHits hits = response.getHits();
        ObjectMapper mapper = new ObjectMapper();
        List<Datum> datumList = new ArrayList<Datum>();
        for (SearchHit hit : hits) {  
            String json = hit.getSourceAsString();
            Datum dtm = new Datum();
           
            try {
                dtm = mapper.readValue(json, Datum.class);
                datumList.add(dtm);
            } catch (Exception e) {
                e.printStackTrace();
            }
            
        }
        return datumList;
    }
    
}
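The setFrom(3).setSize(5) paging used above behaves like MySQL's LIMIT 3,5: skip the first 3 hits, return at most the next 5. The equivalent slice over an in-memory list of hits:

```java
import java.util.Arrays;
import java.util.List;

public class FromSizeSketch {
    public static void main(String[] args) {
        // ten hits standing in for a result set
        List<Integer> hits = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        int from = 3, size = 5;
        // skip the first `from` hits, take at most `size` of the rest
        List<Integer> page = hits.subList(Math.min(from, hits.size()),
                                          Math.min(from + size, hits.size()));
        System.out.println(page); // [3, 4, 5, 6, 7]
    }
}
```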
Program output:
 
资讯编号:130    资讯标题:捷诺维主要成份有哪些 疗效怎么样
查询出来的结果数:5
资讯编号:16    资讯标题:恩必普是不是医保药 可以报销吗
资讯编号:11    资讯标题:恩必普的治疗范围  有什么优势
资讯编号:17    资讯标题:恩必普的作用机制是什么
资讯编号:12    资讯标题:恩必普服用有什么禁忌 注意事项哪些
资讯编号:20    资讯标题:中风可以用恩必普吗
 

