
What is service-oriented architecture? (什么是SOA?)

Service-oriented architecture (SOA) is an evolution of distributed computing based on the request/reply design paradigm for synchronous and asynchronous applications. An application's business logic or individual functions are modularized and presented as services for consumer/client applications. What's key to these services is their loosely coupled nature; i.e., the service interface is independent of the implementation. Application developers or system integrators can build applications by composing one or more services without knowing the services' underlying implementations. For example, a service can be implemented either in .Net or J2EE, and the application consuming the service can be on a different platform or language.
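The loose coupling described above can be sketched in plain Java. The names below (QuoteService, FixedQuoteService) are purely illustrative, not from any real system: the consumer codes only against the interface, so the implementation behind it can be swapped, or even moved to another platform behind a remote proxy, without the consumer changing.

```java
// Hypothetical service contract: the consumer sees only this interface.
interface QuoteService {
    double quoteFor(String symbol);
}

// One possible implementation; it could be replaced by a .NET or J2EE
// remote service behind the same contract without the consumer changing.
class FixedQuoteService implements QuoteService {
    public double quoteFor(String symbol) {
        return 42.0; // canned value, purely for illustration
    }
}

class Consumer {
    public static void main(String[] args) {
        // Wiring could equally come from a registry or directory lookup.
        QuoteService service = new FixedQuoteService();
        System.out.println(service.quoteFor("IBM")); // prints 42.0
    }
}
```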


Nuggets of Wisdom from eBay's Architecture (来自eBay架构的至理名言)

An accurate way of knowing what really works is to look at what actually works in practice. The software industry is plagued with ideas that, for all intents and purposes, are purely theoretical. Compounding the problem is the fact that software vendors continue to praise and sell these ideas as best practices.

Massively scalable architecture is one area that few practitioners have truly witnessed first-hand. Fortunately, information is sometimes graciously released for all to see and hear. I gained a lot of wisdom reading about Google's design of its hardware infrastructure, or even Yahoo's page-rendering patent. Now another internet behemoth, eBay, has provided us with some insight into its own architecture.

There are many pieces of information in this presentation; I'll try to highlight and comment on the ones that are unusual or interesting.

The impressive part is that eBay served 380M page views a day with a site availability of 99.92%. On top of that, the code base absorbed nearly 30K lines of code changes per week. Plainly and simply enviable, and, not least, incontrovertible evidence of the scalability of Java.

Now for the details of how this was achieved using J2EE technologies. The highlights of eBay's scalability are as follows:

  • Judicious use of server-side state
  • No server affinity
  • Functional server pools
  • Horizontal and vertical database partitioning

What's interesting is how eBay enables data-access scalability. They mention the use of a "custom O-R mapping" with support for features like caching (local and global), lazy loading, fetch sets (deep and shallow), and retrieving and submitting subsets of data. Furthermore, they use bean-managed transactions exclusively, autocommitted to the database, and use the O-R mapping to route to different data sources.

A couple of things are quite striking. The first is the complete absence of Entity Beans, in favor of eBay's own O-R mapping solution (Hibernate, anyone?). The second is the partitioning of application servers based on use cases. The third is that the databases are likewise partitioned by use case. The last is the stateless nature of the system and the conspicuous absence of clustering technologies.

Here's the quote about server state:

This basically means that right now we are not really using server-side state. We may use it; right now we have not found a good reason to use it. [snip] if there is something that needs to be stateful, then we put in the database; we go back and get it, if we need to. We just take the hit. We do not have to do clustering; we do not have to do any of that stuff.

In short, save yourself the trouble of building stateful servers, and while you're at it forget about clustering; you simply may not need it. Now read this about functional partitioning:

So we have a pool or a farm of machines that are dedicated to a specific use case; like search will have its own farm of machines, and we can tune those much differently because the footprint and the replay of those are much different than viewing an item, which is essentially a read-only use case, versus selling an item, which is read-mostly type of use case. [snip] Horizontal database partitioning is something that we have adopted in the last probably four or five years to really get the availability, and also scalability, that we need.

In short, forget about placing your application and database on one giant machine, just use pools of servers that are dedicated on a use case basis. Doesn't that sound awfully similar to Google's strategy?

A little bit more about horizontal partitioning:

What enables our horizontal scalability is content-based routing. So, if you imagine, eBay has on any given day 60 million items. We do not want to store that in one behemoth Sun machine. [snip] let us scale it across, maybe, many Sun machines, but how do you get to the right one? That is where the content-based routing idea comes into play. So, the idea was that given some hint, find out which of my 20 physical database hosts I need to go to. The other cool thing about this is that failover can be defined.
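A minimal sketch of the content-based-routing idea from the quote: given a hint (here an item ID), pick one of N physical database hosts, with an obvious place to plug in failover. The modulo-hash scheme and the host names are my own assumptions for illustration, not eBay's actual rule.

```java
// Illustrative content-based router: maps an item ID to one of N database hosts.
// The modulo-hash routing rule is an assumption, purely for illustration.
class ContentBasedRouter {
    private final String[] hosts;

    ContentBasedRouter(String[] hosts) {
        this.hosts = hosts;
    }

    /** Pick the primary host for the given routing hint. */
    String hostFor(long itemId) {
        int shard = (int) (Math.abs(itemId) % hosts.length);
        return hosts[shard];
    }

    /** Naive failover: step to the next host in the ring. */
    String failoverFor(long itemId) {
        int shard = (int) ((Math.abs(itemId) % hosts.length + 1) % hosts.length);
        return hosts[shard];
    }

    public static void main(String[] args) {
        String[] hosts = new String[20]; // 20 hosts, as in the quote
        for (int i = 0; i < 20; i++) hosts[i] = "db-host-" + i;
        ContentBasedRouter router = new ContentBasedRouter(hosts);
        System.out.println(router.hostFor(60000123L));     // prints db-host-3
        System.out.println(router.failoverFor(60000123L)); // prints db-host-4
    }
}
```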

Finally a word about using a more loosely coupled architecture in the future:

Using messaging to actually decouple disparate use cases is something that we are investigating.

Isn't it strange that the original presentation was about J2EE design patterns? The key scalability ideas are only tangentially related to the patterns. Yes, eBay does use patterns to structure its code; however, focusing on the patterns misses the bigger picture. The key nuggets of wisdom are a stateless design, the use of a flexible and highly tuned O-R mapping layer, and the partitioning of servers based on use cases. The design patterns are nice, but don't expect their blind application to lead to scalability.

In general, the approach that eBay is alluding to (and Google has confirmed) is that architectures consisting of pools or farms of machines dedicated on a use-case basis provide better scalability and availability than a few behemoth machines. The vendors, of course, are gripped with fear by this conclusion, for obvious reasons. Nevertheless, the biggest technical hurdle in deploying a large number of servers is, of course, none other than manageability ;-)


EJB3 Related Article (有关EJB3的文章)


Compression and Decompression with java.util.zip in Java

Using java.util.zip for compression and decompression
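As a small taste of what java.util.zip offers, here is a self-contained sketch of an in-memory zip round-trip; the entry name and sample data are my own, purely for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

// Minimal in-memory zip round-trip with java.util.zip.
class ZipRoundTrip {
    /** Zip a byte array into a single-entry archive held in memory. */
    static byte[] zip(String entryName, byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ZipOutputStream zos = new ZipOutputStream(bos);
        zos.putNextEntry(new ZipEntry(entryName));
        zos.write(data);
        zos.closeEntry();
        zos.close(); // finishes the archive
        return bos.toByteArray();
    }

    /** Read the first (only) entry of an in-memory archive back out. */
    static byte[] unzip(byte[] zipped) throws Exception {
        ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipped));
        zis.getNextEntry(); // position at the first entry
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = zis.read(buf)) != -1) bos.write(buf, 0, n);
        zis.close();
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "hello java.util.zip".getBytes(StandardCharsets.UTF_8);
        byte[] restored = unzip(zip("greeting.txt", original));
        // prints: hello java.util.zip
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```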


Using GZIP in Java

Compressing and decompressing files and serialized objects with gzip
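To illustrate the combination the title mentions, here is a sketch that gzips a serialized object and reads it back, all in memory; the helper names are my own, purely for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Gzip-compressing a serialized object, then deserializing it back.
class GzipSerialization {
    /** Serialize an object and gzip the byte stream. */
    static byte[] writeGzipped(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos));
        oos.writeObject(obj);
        oos.close(); // flushes and finishes the gzip stream
        return bos.toByteArray();
    }

    /** Reverse of writeGzipped: gunzip, then deserialize. */
    static Object readGzipped(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(
                new GZIPInputStream(new ByteArrayInputStream(bytes)));
        Object obj = ois.readObject();
        ois.close();
        return obj;
    }

    public static void main(String[] args) throws Exception {
        String original = "serialized and gzipped";
        byte[] gz = writeGzipped(original);
        System.out.println(readGzipped(gz)); // prints: serialized and gzipped
    }
}
```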


Brico: a framework for building Swing apps


A Basic Guide to Lucene

Authors: lighter, 江南白衣

Lucene is an Apache open-source project implementing a full-text search engine in Java. It is very powerful, yet its API is actually quite simple; essentially it does just two things: building indexes and searching them.

1. The most important terms when building an index

  • Document: the unit of indexing, comparable to a row in a database table; any data you want indexed must first be converted into a Document object.
  • Field: a field of a Document, comparable to a database column. Field is one of Lucene's more concept-laden terms; see the details below.
  • IndexWriter: responsible for writing Documents into the index files. Its usual constructor takes three arguments: the path where the index is stored, an analyzer, and whether to re-create the index. One important point: after calling addDocument, always remember to call close on the IndexWriter; only then will the indexer flush everything held in memory to disk and close the output stream.
  • Analyzer: the analyzer, used mainly for tokenizing text. Common choices include StandardAnalyzer, StopAnalyzer, and WhitespaceAnalyzer.
  • Directory: where the index is stored. Lucene offers two locations, disk and memory; indexes are usually kept on disk, and Lucene provides the FSDirectory and RAMDirectory classes for the two cases.
  • Segment: the most basic unit of a Lucene index. At bottom, Lucene works by continually adding new segments and then merging existing segments into new ones according to certain rules.

Building an index with Lucene, then, is the process of converting the objects to be indexed into Lucene Document objects and using an IndexWriter to write them into Lucene's own index file format.

The objects to be indexed can come from files, databases, or any other source; it is up to you to write the code that walks a directory reading files or queries a table for a ResultSet. Lucene's API deals only with strings.

1.1 Field explained

The source code shows the following Field constructors:

Field(String name, byte[] value, Field.Store store)
Field(String name, Reader reader)
Field(String name, Reader reader, Field.TermVector termVector)
Field(String name, String value, Field.Store store, Field.Index index)
Field(String name, String value, Field.Store store, Field.Index index, Field.TermVector termVector)

Field has three inner classes: Field.Index, Field.Store, and Field.TermVector. In particular:

  • Field.Index has four values:
    Field.Index.TOKENIZED: tokenize the content and index the tokens.
    Field.Index.UN_TOKENIZED: index the content without tokenizing it; useful for author names, dates and the like, where "Rod Johnson" should remain a single term and needs no tokenizing.
    Field.Index.NO: do not index at all; for content that must not be searchable, such as auxiliary document attributes like content type or URL.
    Field.Index.NO_NORMS: index without tokenizing and without storing norms.
  • Field.Store has three values:
    Field.Store.YES: an index file normally holds only index data; this setting additionally stores the original content in the index, e.g. a document's title.
    Field.Store.NO: the original text is not stored in the index; after a search hit, the original is re-opened via some other attribute, such as a file path or a database primary key. Suitable when the original content is large.
    Field.Store.COMPRESS: store the original content in compressed form.
  • Field.TermVector was added in Lucene 1.4.3; it provides a vector mechanism for similarity queries and is rarely used.

The Field attributes above differ considerably from those of Lucene 1.4.3. In the old 1.4.3 API, a field's role was set through Field.Keyword(...), Field.UnIndexed(...), Field.UnStored(...) and Field.Text(...); the current version achieves the same effects through combinations of Field.Index and Field.Store.
One more note: the two constructors below default to Field.Store.NO and Field.Index.TOKENIZED:

Field(String name, Reader reader)
Field(String name, Reader reader, Field.TermVector termVector)
  • Limiting a Field's length:
    IndexWriter provides a setMaxFieldLength method to cap field length; a look at the source shows that the default is 10000, and the value can be changed at any time. With the default, Lucene indexes only a document's first 10,000 terms; terms beyond that limit are simply not indexed.
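To make the Field.Store / Field.Index combinations above concrete, here is a small sketch that builds one Document with the four typical field roles. It assumes the Lucene jar (the same post-1.4.3 API this guide uses) is on the classpath; the field names and values are made up for illustration.

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

// Sketch: the old Field.Keyword/Text/UnIndexed/UnStored roles expressed
// with the newer Field.Store / Field.Index combinations.
class FieldCombinations {
    static Document exampleDoc() {
        Document doc = new Document();
        // Keyword-style: stored, indexed as a single term (ids, dates, authors).
        doc.add(new Field("author", "Rod Johnson",
                Field.Store.YES, Field.Index.UN_TOKENIZED));
        // Text-style: stored and tokenized (titles, short bodies).
        doc.add(new Field("title", "Lucene basics",
                Field.Store.YES, Field.Index.TOKENIZED));
        // UnIndexed-style: stored but not searchable (URLs, content types).
        doc.add(new Field("url", "http://example.com/doc",
                Field.Store.YES, Field.Index.NO));
        // UnStored-style: searchable, but the original text is not kept in the index.
        doc.add(new Field("body", "full text goes here",
                Field.Store.NO, Field.Index.TOKENIZED));
        return doc;
    }
}
```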

1.2 Merging, deleting and optimizing indexes

  • IndexWriter's addIndexes method merges indexes. When indexes have been built in different places and need to be combined, addIndexes is the natural tool.
  • Documents can be deleted from an index through the IndexReader class. IndexReader is a peculiar class: the source shows it is constructed mainly through its own static methods. Example:
    IndexReader reader = IndexReader.open("C:\\springside");
    reader.deleteDocument(x);                               // x is an int document number; this approach is not recommended
    reader.deleteDocuments(new Term("name", "springside")); // delete by field value instead; this is the recommended approach
    reader.close();
  • Optimizing an index: IndexWriter's optimize method merges multiple segments into a single new segment, which speeds up searches once the index is built. Note, however, that optimize slows down indexing and temporarily requires extra disk space.
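The maintenance operations above can be sketched together as one program. It assumes the Lucene jar on the classpath and uses the same era of API as the rest of this guide; the index paths are made up for illustration.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Sketch of merge / optimize / delete on an index (paths are made up).
class IndexMaintenance {
    public static void main(String[] args) throws Exception {
        // Merge two separately built indexes into a third one.
        IndexWriter writer = new IndexWriter("c:\\merged", new StandardAnalyzer(), true);
        writer.addIndexes(new Directory[] {
                FSDirectory.getDirectory("c:\\index1", false),
                FSDirectory.getDirectory("c:\\index2", false)
        });
        // optimize() merges segments for faster searching, at the cost of
        // slower indexing and extra disk space while it runs.
        writer.optimize();
        writer.close();

        // Delete by term rather than by internal document number.
        IndexReader reader = IndexReader.open("c:\\merged");
        reader.deleteDocuments(new Term("name", "springside"));
        reader.close();
    }
}
```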

2. The most common terms when searching

  • IndexSearcher: the basic retrieval tool in Lucene; every search goes through an IndexSearcher. Initializing one requires the path of the index, so the searcher can locate it.
  • Query: Lucene supports fuzzy queries, semantic queries, phrase queries, boolean combinations and more, through classes such as TermQuery, BooleanQuery, RangeQuery and WildcardQuery.
  • QueryParser: a tool for parsing user input; it scans the input string and produces a Query object.
  • Hits: once a search completes, the results have to be returned and displayed to the user; only then is the search really finished. In Lucene, the result set is represented by an instance of the Hits class. Its main methods are:
    length(): returns the total number of results (the simple example below uses this)
    doc(int n): returns the n-th document
    iterator(): returns an iterator

    A further word about Hits, which is one of Lucene's nicer touches. Anyone familiar with Hibernate knows its lazy-loading behavior; Lucene has the same. Hits returns its results lazily: when a particular document is accessed, the Hits object internally runs another retrieval against the index, and only then hands the result back for display.

3. A simple example

First put the Lucene jar on the classpath, then write the following simple class:

public class FSDirectoryTest {
    // path where the index is created
    public static final String path = "c:\\index2";

    public static void main(String[] args) throws Exception {
        Document doc1 = new Document();
        doc1.add(new Field("name", "lighter springside com", Field.Store.YES, Field.Index.TOKENIZED));

        Document doc2 = new Document();
        doc2.add(new Field("name", "lighter blog", Field.Store.YES, Field.Index.TOKENIZED));

        IndexWriter writer = new IndexWriter(FSDirectory.getDirectory(path, true), new StandardAnalyzer(), true);
        writer.addDocument(doc1);
        writer.addDocument(doc2);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(path);
        Hits hits = null;
        Query query = null;
        QueryParser qp = new QueryParser("name", new StandardAnalyzer());

        query = qp.parse("lighter");
        hits = searcher.search(query);
        System.out.println("Search for \"lighter\" found " + hits.length() + " result(s)");

        query = qp.parse("springside");
        hits = searcher.search(query);
        System.out.println("Search for \"springside\" found " + hits.length() + " result(s)");
    }
}

Output:

Search for "lighter" found 2 result(s)
Search for "springside" found 1 result(s)

4. A slightly more complex example

  • Under Windows, create a folder named s on the C: drive, and inside it create three txt files with whatever names you like, say "1.txt", "2.txt" and "3.txt".
    The content of 1.txt is:
    springside社区
    更大进步,吸引更多用户,更多贡献
    2007年

    "2.txt" and "3.txt" can contain anything; to keep things simple here, just give them the same content as 1.txt.

  • Download the Lucene jar, put it on the classpath, and then build the index:
    /**
     * author lighter date 2006-8-7
     */
    public class LuceneExample {
        public static void main(String[] args) throws Exception {
            File fileDir = new File("c:\\s");      // folder containing the files to index
            File indexDir = new File("c:\\index"); // folder where the index will be stored
            File[] textFiles = fileDir.listFiles();

            Analyzer luceneAnalyzer = new StandardAnalyzer();
            IndexWriter indexWriter = new IndexWriter(indexDir, luceneAnalyzer, true);
            indexFile(luceneAnalyzer, indexWriter, textFiles);
            indexWriter.optimize(); // optimize the finished index
            indexWriter.close();
        }

        // add a Document to the index for each .txt file
        public static void indexFile(Analyzer luceneAnalyzer, IndexWriter indexWriter, File[] textFiles) throws Exception {
            for (int i = 0; i < textFiles.length; i++) {
                if (textFiles[i].isFile() && textFiles[i].getName().endsWith(".txt")) {
                    String temp = FileReaderAll(textFiles[i].getCanonicalPath(), "GBK");
                    Document document = new Document();
                    Field fieldBody = new Field("body", temp, Field.Store.YES, Field.Index.TOKENIZED);
                    document.add(fieldBody);
                    indexWriter.addDocument(document);
                }
            }
        }

        // read a whole file into a String using the given charset
        public static String FileReaderAll(String fileName, String charset) throws IOException {
            BufferedReader reader = new BufferedReader(new InputStreamReader(
                    new FileInputStream(fileName), charset));
            String line = "";
            String temp = "";
            while ((line = reader.readLine()) != null) {
                temp += line;
            }
            reader.close();
            return temp;
        }
    }
  • Run the query:
    public class TestQuery {
        public static void main(String[] args) throws IOException, ParseException {
            Hits hits = null;
            String queryString = "社区";
            Query query = null;
            IndexSearcher searcher = new IndexSearcher("c:\\index");

            Analyzer analyzer = new StandardAnalyzer();
            try {
                QueryParser qp = new QueryParser("body", analyzer);
                query = qp.parse(queryString);
            } catch (ParseException e) {
                // ignore parse failures in this demo; query simply stays null
            }
            if (searcher != null) {
                hits = searcher.search(query);
                if (hits.length() > 0) {
                    System.out.println("Found " + hits.length() + " result(s)!");
                }
            }
        }
    }
  • Output:
    Found 3 result(s)!

5. Using Hibernate together with Lucene

This article covers the integration in detail:
http://wiki.redsaga.com/confluence/display/HART/Hibernate+Lucene+Integration


Hibernate Users FAQ

Hibernate Common Problems
