All posts in the "编程学习" (Programming) category
November 15, 2012

Rant: modernizing an old MySQL project — fixing unreachable pages, deadlocks, and more

Back in 2008 I hadn't been in IT for long and wasn't very experienced. As project lead I chose MySQL and, with a few programmers even less experienced than me, built an information-publishing system for the XX government.

What I regret to this day: choosing MySQL, designing the tables badly, and not reviewing my junior programmers' code carefully. Since going live in 2008, the client base has grown from about 80 organizations to 940, with nearly a thousand users in total.

As the data grew, the front end frequently failed to load and back-office users couldn't post articles. I don't know exactly when it started, but the problem kept getting worse; since 2011 the only "fix" has been restarting Tomcat — we never had a real solution.

This year I finally audited the code and discovered that one programmer, on every insert, ran `select *` against the largest table and loaded the entire result set into memory! Worse still, the variable holding that data was never used anywhere in the code that followed! (What did I ever do to him to deserve that landmine?)

Of course, finding that one fatal line doesn't solve everything. Based on four years of experience with the system, here is the plan we put together:

1. Replace the MySQL database with Oracle;

      We built a data-extraction tool that supports many data sources and, through configuration, handles migration and transformation, moving the old system's data to the new one seamlessly.

2. Optimize the table structure with horizontal/vertical partitioning;

      Vertical partitioning moves the large columns out of the main table, which speeds up paged queries.

3. Separate the front end from the back end, and use Lucene for front-end search;

      A scheduled job keeps the Lucene index up to date (incremental or full rebuilds, without interrupting queries).

      Lucene is fast enough that the front-end pages need no static generation; for a read-mostly front end like this one, static generation would actually have hurt performance.

4. Drive the statistics job off publish state;

      A status table records which organizations have new articles; only those get their statistics recomputed.

5. Tune the paged queries for Oracle so that only the current page's rows are fetched.
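For point 5, the usual Oracle (pre-12c) pattern is a double-nested ROWNUM query; the table and column names below are placeholders, not the real schema:

```sql
-- Rows 21..30, i.e. page 3 at 10 rows per page.
SELECT *
  FROM (SELECT t.*, ROWNUM rn
          FROM (SELECT id, title, pdate
                  FROM articles
                 ORDER BY pdate DESC) t
         WHERE ROWNUM <= 30)
 WHERE rn > 20
```

The extra nesting matters: the inner ORDER BY must run before ROWNUM is assigned, otherwise the row numbers don't follow the sort order.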

Switching databases was a last resort. After a month and a half of work by three or four people, the new system finally shipped last month, and the front end now loads blazingly fast. I keep opening it just to admire it — I suppose everyone in software likes looking at their own product...

You can search this site for related articles on Lucene and Apache + Tomcat. For what it's worth, Lucene alone solved this system's problems; we never ended up adding load balancing.

 

November 14, 2012

Apache 2.2: dealing with an oversized access.log

Modify the configuration file to create one log file per day. In the rotatelogs arguments below, 86400 is the rotation interval in seconds (one day) and 480 is the offset from UTC in minutes (UTC+8).

CustomLog "|D:/apache2/bin/rotatelogs.exe D:/apache2/logs/access_%Y_%m_%d.log 86400 480" common

ErrorLog "|D:/apache2/bin/rotatelogs.exe D:/apache2/logs/error_%Y_%m_%d.log 86400 480"

November 8, 2012

Apache + Tomcat load balancing on a single machine: a worked example

I. Preparation

Tomcat6 : http://tomcat.apache.org/download-60.cgi

Download: apache-tomcat-6.0.36.exe

apache httpd server 2.2: http://www.fayea.com/apache-mirror//httpd/binaries/win32/

Download: httpd-2.2.22-win32-x86-no_ssl.msi

apache tomcat connector: http://archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/win32/jk-1.2.31/

Download: mod_jk-1.2.31-httpd-2.2.3.so

II. Installation and configuration

Install paths:

E:\Apache2.2

E:\apache-tomcat-6.0.36-1

E:\apache-tomcat-6.0.36-2

Project path:

E:\work\demo

1. Apache configuration

Edit the file E:\Apache2.2\conf\httpd.conf.

1) Include an external configuration file:

Add this as the last line of the file:

include conf/mod_jk.conf

2) Configure the project path:

<IfModule alias_module>
    Alias /demo "E:/work/demo"
    ScriptAlias /cgi-bin/ "E:/Apache2.2/cgi-bin/"
</IfModule>

3) Configure directory permissions:

<Directory "E:/work/demo"> 
   Order Deny,Allow 
   Allow from all 
</Directory>

4) Configure the default index page:
Add index.jsp

<IfModule dir_module>
    DirectoryIndex index.jsp index.html
</IfModule>

5) Create E:\Apache2.2\conf\mod_jk.conf with the following content:

LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so
JkWorkersFile conf/workers.properties
# Route these requests to Tomcat; "controller" is the load-balancer worker defined in workers.properties
JkMount /*.jsp controller

Also copy mod_jk-1.2.31-httpd-2.2.3.so into the E:\Apache2.2\modules folder.

6) Create E:\Apache2.2\conf\workers.properties with the following content:

#server
worker.list = controller
#========tomcat1========
worker.tomcat1.port=10009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor = 1
#========tomcat2========
worker.tomcat2.port=11009
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor = 1

#========controller, the load-balancing controller========
worker.controller.type=lb
worker.controller.balanced_workers=tomcat1,tomcat2
worker.controller.sticky_session=false
worker.controller.sticky_session_force=1
#worker.controller.sticky_session=1

2. Tomcat configuration

1) Tomcat-1 configuration

E:\apache-tomcat-6.0.36-1\conf\server.xml

Ports to change:

<Server port="10005" shutdown="SHUTDOWN">
<Connector port="10080" URIEncoding="GBK" protocol="HTTP/1.1" 
connectionTimeout="20000" keepAliveTimeout="15000" maxKeepAliveRequests="1" 
redirectPort="8443" />
<Connector port="10009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
   <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

2) Tomcat-2 configuration

E:\apache-tomcat-6.0.36-2\conf\server.xml

Ports to change:

<Server port="11005" shutdown="SHUTDOWN">
<Connector port="11080" URIEncoding="GBK" protocol="HTTP/1.1" 
connectionTimeout="20000" keepAliveTimeout="15000" maxKeepAliveRequests="1" 
redirectPort="8443" />
<Connector port="11009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
   <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

III. Testing the project

In the project's web.xml, add <distributable/> under <web-app>.
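For reference, a minimal sketch of where <distributable/> sits — the Servlet 2.5 schema header shown here is an assumption; your project's may differ:

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
    <distributable/>
    <!-- servlets, mappings, etc. -->
</web-app>
```

Without this marker, Tomcat will not replicate the webapp's sessions across the cluster.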

test.jsp

<%@ page contentType="text/html; charset=GBK"%>
<%@ page import="java.util.*"%>
<html>
	<head>
		<title>Cluster App Test</title>
	</head>
	<body>
		Server Info:
		<%
		out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
		%>
		<%
			out.println("<br> ID " + session.getId() + "<br>");
			// If a new session attribute was submitted
			String dataName = request.getParameter("dataName");
			if (dataName != null && dataName.length() > 0) {
				String dataValue = request.getParameter("dataValue");
				session.setAttribute(dataName, dataValue);
			}
			out.println("<b>Session attributes</b><br>");
			System.out.println("============================");
			Enumeration e = session.getAttributeNames();
			while (e.hasMoreElements()) {
				String name = (String) e.nextElement();
				String value = session.getAttribute(name).toString();
				out.println(name + " = " + value + "<br>");
				System.out.println(name + " = " + value);
			}
		%>
		<form action="test.jsp" method="POST">
			Name:
			<input type=text size=20 name="dataName">
			<br>
			Value:
			<input type=text size=20 name="dataValue">
			<br>
			<input type=submit>
		</form>
	</body>
</html>

Start the Apache 2 service first, then start the two Tomcats in turn.
Visit each of:
http://127.0.0.1:10080/test.jsp
http://127.0.0.1:11080/test.jsp
http://127.0.0.1/test.jsp
Then test: if all three pages show the same session contents, the configuration works.

IV. Notes

1. If the test fails, check the logs for the error, re-check each configuration step (including Apache's directory permissions), and mind the component versions;
2. Objects stored in the session must be serializable, i.e. the class must implement Serializable.
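A minimal sketch of point 2 — a hypothetical session attribute class (the class name and field are made up for illustration) with a quick serialization round trip:

```java
import java.io.*;

// Objects placed in a replicated session must implement Serializable,
// otherwise Tomcat cannot ship them to the other cluster node.
public class UserInfo implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String name;

    public UserInfo(String name) { this.name = name; }
    public String getName() { return name; }

    // Round-trip check: serialize to bytes and read the object back.
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(new UserInfo("wizzer"));
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        UserInfo copy = (UserInfo) in.readObject();
        System.out.println(copy.getName()); // prints "wizzer"
    }
}
```

Any non-serializable field inside such a class (a raw JDBC connection, say) would make replication fail at runtime.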

August 29, 2012

Lucene 3.6.1: Chinese word segmentation, index building, sorting, multi-field paged queries, and highlighting (with source)

1. Preparation

Download Lucene 3.6.1: http://lucene.apache.org/

Download the IK Analyzer Chinese tokenizer: http://code.google.com/p/ik-analyzer/downloads/list (use IK Analyzer 2012_u5_source.zip; other versions have bugs)

Download Solr 3.6.1: http://lucene.apache.org/solr/ (its jars are needed when compiling IK Analyzer)

OK — copy the Lucene and Solr jars (lucene-core-3.6.1.jar, lucene-highlighter-3.6.1.jar, lucene-analyzers-3.6.1.jar, apache-solr-core-3.6.1.jar, apache-solr-solrj-3.6.1.jar) into the project's lib directory, and put the IK Analyzer source under src.

2. Building the index from Oracle data (using the IK analyzer)

package lucene.util;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.wltea.analyzer.lucene.IKAnalyzer;

import java.sql.Connection;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;

import modules.gk.Gk_info;
import modules.gk.Gk_infoSub;
import web.sys.Globals;
import web.db.DBConnector;
import web.db.ObjectCtl;
import web.util.StringUtil;
//Wizzer.cn
public class LuceneIndex {
    IndexWriter writer = null;
    FSDirectory dir = null;
    boolean create = true;// whether to (re)create and overwrite the index

    public void init() {
        long a1 = System.currentTimeMillis();
        System.out.println("[Lucene indexing started: " + new Date() + "]");
        Connection con = DBConnector.getconecttion(); // obtain a database connection
        try {
            final File docDir = new File(Globals.SYS_COM_CONFIG.get("sys.index.path").toString());//E:\lucene
            if (!docDir.exists()) {
                docDir.mkdirs();
            }
            String cr = Globals.SYS_COM_CONFIG.get("sys.index.create").toString();//true or false
            if ("false".equals(cr.toLowerCase())) {
                create = false;
            }
            dir = FSDirectory.open(docDir);
//            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
            Analyzer analyzer = new IKAnalyzer(true);
            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_36, analyzer);
            if (create) {
                // Create a new index in the directory, removing any
                // previously indexed documents:
                iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
            } else {
                // Add new documents to an existing index:
                iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
            }
            writer = new IndexWriter(dir, iwc);
            String sql = "SELECT indexno,title,describes,pdate,keywords FROM TABLEA WHERE STATE=1 AND SSTAG<>1 ";
            int rowCount = ObjectCtl.getRowCount(con, sql);
            int pageSize = StringUtil.StringToInt(Globals.SYS_COM_CONFIG.get("sys.index.size").toString());   // rows per page
            int pages = (rowCount - 1) / pageSize + 1; // total page count
            ArrayList list = null;
            Gk_infoSub gk = null;
            for (int i = 1; i < pages+1; i++) {
                long a = System.currentTimeMillis();
                list = ObjectCtl.listPage(con, sql, i, pageSize, new Gk_infoSub());
                for (int j = 0; j < list.size(); j++) {
                    gk = (Gk_infoSub) list.get(j);
                    Document doc = new Document();
                    doc.add(new Field("indexno", StringUtil.null2String(gk.getIndexno()), Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS));// primary key: not analyzed
                    doc.add(new Field("title", StringUtil.null2String(gk.getTitle()), Field.Store.YES, Field.Index.ANALYZED));
                    doc.add(new Field("describes", StringUtil.null2String(gk.getDescribes()), Field.Store.YES, Field.Index.ANALYZED));
                    doc.add(new Field("pdate", StringUtil.null2String(gk.getPdate()), Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS));// date: not analyzed
                    doc.add(new Field("keywords", StringUtil.null2String(gk.getKeywords()), Field.Store.YES, Field.Index.ANALYZED));
                    writer.addDocument(doc);
                    ObjectCtl.executeUpdateBySql(con,"UPDATE TABLEA SET SSTAG=1 WHERE indexno='"+gk.getIndexno()+"'");// mark the row as indexed
                }

                long b = System.currentTimeMillis();
                long c = b - a;
                System.out.println("[Lucene " + rowCount + " rows, " + pages + " pages; page " + i + " took " + c + " ms]");
            }
            writer.commit();

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            DBConnector.freecon(con); // release the database connection
            try {
                if (writer != null) {
                    writer.close();
                }
            } catch (CorruptIndexException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    if (dir != null && IndexWriter.isLocked(dir)) {
                        IndexWriter.unlock(dir);// remember to unlock the index
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        long b1 = System.currentTimeMillis();
        long c1 = b1 - a1;
        System.out.println("[Lucene indexing finished; took " + c1 + " ms, finished at: " + new Date() + "]");
    }
}

 

3. Single-field queries, and multi-field paged queries with highlighting

 

package lucene.util;

import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.search.*;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.util.Version;
import modules.gk.Gk_infoSub;

import java.util.ArrayList;
import java.io.File;
import java.io.StringReader;
import java.lang.reflect.Constructor;

import web.util.StringUtil;
import web.sys.Globals;
import org.wltea.analyzer.lucene.IKAnalyzer;
//Wizzer.cn
public class LuceneQuery {
    private static String indexPath;// directory where the index lives
    private int rowCount;// total hit count
    private int pages;// total pages
    private int currentPage;// current page number
    private int pageSize;   // rows per page

    public LuceneQuery() {
        this.indexPath = Globals.SYS_COM_CONFIG.get("sys.index.path").toString();
    }

    public int getRowCount() {
        return rowCount;
    }

    public int getPages() {
        return pages;
    }

    public int getPageSize() {
        return pageSize;
    }

    public int getCurrentPage() {
        return currentPage;
    }

    /**
     * Query the index by a single field (title).
     */
    public ArrayList queryIndexTitle(String keyWord, int curpage, int pageSize) {
        ArrayList list = new ArrayList();
        try {
            if (curpage <= 0) {
                curpage = 1;
            }
            if (pageSize <= 0) {
                pageSize = 20;
            }
            this.pageSize = pageSize;   // rows per page
            this.currentPage = curpage;   // current page
            int start = (curpage - 1) * pageSize;
            Directory dir = FSDirectory.open(new File(indexPath));
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            Analyzer analyzer = new IKAnalyzer(true);
            QueryParser queryParser = new QueryParser(Version.LUCENE_36, "title", analyzer);
            queryParser.setDefaultOperator(QueryParser.AND_OPERATOR);
            Query query = queryParser.parse(keyWord);
            int hm = start + pageSize;
            TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
            searcher.search(query, res);

            SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<span style='color:red'>", "</span>");
            Highlighter highlighter = new Highlighter(simpleHTMLFormatter, new QueryScorer(query));
            this.rowCount = res.getTotalHits();
            this.pages = (rowCount - 1) / pageSize + 1; // total pages
            TopDocs tds = res.topDocs(start, pageSize);
            ScoreDoc[] sd = tds.scoreDocs;
            for (int i = 0; i < sd.length; i++) {
                Document hitDoc = reader.document(sd[i].doc);
                list.add(createObj(hitDoc, analyzer, highlighter));
            }

        } catch (Exception e) {
            e.printStackTrace();
        }

        return list;

    }
    /**
     * Multi-field query: all keywords / any keyword / excluded keywords.
     */
    public ArrayList queryIndexFields(String allkeyword, String onekeyword, String nokeyword, int curpage, int pageSize) {
        ArrayList list = new ArrayList();
        try {
            if (curpage <= 0) {
                curpage = 1;
            }
            if (pageSize <= 0) {
                pageSize = 20;
            }
            this.pageSize = pageSize;   // rows per page
            this.currentPage = curpage;   // current page
            int start = (curpage - 1) * pageSize;
            Directory dir = FSDirectory.open(new File(indexPath));
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            BooleanQuery bQuery = new BooleanQuery();  // compound query
            if (!"".equals(allkeyword)) {// must contain all of these keywords
                KeywordAnalyzer analyzer = new KeywordAnalyzer();
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD};// match in any of the three fields
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, allkeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST);  // AND
            }
            if (!"".equals(onekeyword)) { // may contain any of these keywords
                Analyzer analyzer = new IKAnalyzer(true);
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD};// match in any of the three fields
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, onekeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST);  // AND
            }
            if (!"".equals(nokeyword)) { // must not contain these keywords
                Analyzer analyzer = new IKAnalyzer(true);
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD};// match in any of the three fields
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, nokeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST_NOT);  // NOT

            }
            int hm = start + pageSize;
            TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
            searcher.search(bQuery, res);
            SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<span style='color:red'>", "</span>");
            Highlighter highlighter = new Highlighter(simpleHTMLFormatter, new QueryScorer(bQuery));
            this.rowCount = res.getTotalHits();
            this.pages = (rowCount - 1) / pageSize + 1; // total pages
            System.out.println("rowCount:" + rowCount);
            TopDocs tds = res.topDocs(start, pageSize);
            ScoreDoc[] sd = tds.scoreDocs;
            Analyzer analyzer = new IKAnalyzer();
            for (int i = 0; i < sd.length; i++) {
                Document hitDoc = reader.document(sd[i].doc);
                list.add(createObj(hitDoc, analyzer, highlighter));
            }

        } catch (Exception e) {
            e.printStackTrace();
        }

        return list;

    }

    /**
     * Build the result object (with highlighting)
     */

    private synchronized static Object createObj(Document doc, Analyzer analyzer, Highlighter highlighter) {

        Gk_infoSub gk = new Gk_infoSub();
        try {

            if (doc != null) {
                gk.setIndexno(StringUtil.null2String(doc.get("indexno")));
                gk.setPdate(StringUtil.null2String(doc.get("pdate")));
                String title = StringUtil.null2String(doc.get("title"));
                gk.setTitle(title);
                if (!"".equals(title)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(title.length()));
                    TokenStream tk = analyzer.tokenStream("title", new StringReader(title));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, title));
                    if (!"".equals(htext)) {
                        gk.setTitle(htext);
                    }
                }
                String keywords = StringUtil.null2String(doc.get("keywords"));
                gk.setKeywords(keywords);
                if (!"".equals(keywords)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(keywords.length()));
                    TokenStream tk = analyzer.tokenStream("keywords", new StringReader(keywords));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, keywords));
                    if (!"".equals(htext)) {
                        gk.setKeywords(htext);
                    }
                }
                String describes = StringUtil.null2String(doc.get("describes"));
                gk.setDescribes(describes);
                if (!"".equals(describes)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(describes.length()));
                    TokenStream tk = analyzer.tokenStream("describes", new StringReader(describes));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, describes));
                    if (!"".equals(htext)) {
                        gk.setDescribes(htext);
                    }
                }

            }
            return gk;
        }
        catch (Exception e) {

            e.printStackTrace();
            return null;
        }
        finally {
            gk = null;
        }

    }

    private synchronized static Object createObj(Document doc) {

        Gk_infoSub gk = new Gk_infoSub();
        try {

            if (doc != null) {
                gk.setIndexno(StringUtil.null2String(doc.get("indexno")));
                gk.setPdate(StringUtil.null2String(doc.get("pdate")));
                gk.setTitle(StringUtil.null2String(doc.get("title")));
                gk.setKeywords(StringUtil.null2String(doc.get("keywords")));
                gk.setDescribes(StringUtil.null2String(doc.get("describes")));
            }
            return gk;
        }
        catch (Exception e) {

            e.printStackTrace();
            return null;
        }
        finally {
            gk = null;
        }

    }
}
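The page-count expression used throughout the class, `(rowCount - 1) / pageSize + 1`, leans on Java's truncating integer division. A tiny sketch of its edge cases (class and method names are mine, for illustration):

```java
public class PageCount {
    // Total pages for rowCount hits at pageSize rows per page.
    // Integer division truncates toward zero, so rowCount = 0 still yields 1 page.
    static int pages(int rowCount, int pageSize) {
        return (rowCount - 1) / pageSize + 1;
    }

    public static void main(String[] args) {
        System.out.println(pages(0, 20));   // 1 (an empty result still renders one page)
        System.out.println(pages(20, 20));  // 1 (exactly one full page)
        System.out.println(pages(21, 20));  // 2 (one extra row spills onto a second page)
    }
}
```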

 

Single-field query usage:

        long a = System.currentTimeMillis();
        try {
            int curpage = StringUtil.StringToInt(StringUtil.null2String(form.get("curpage")));
            int pagesize = StringUtil.StringToInt(StringUtil.null2String(form.get("pagesize")));
            String title = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("title")));
            LuceneQuery lu = new LuceneQuery();
            form.addResult("list", lu.queryIndexTitle(title, curpage, pagesize));
            form.addResult("curPage", lu.getCurrentPage());
            form.addResult("pageSize", lu.getPageSize());
            form.addResult("rowCount", lu.getRowCount());
            form.addResult("pageCount", lu.getPages());
        } catch (Exception e) {
            e.printStackTrace();
        }
        long b = System.currentTimeMillis();
        long c = b - a;
        System.out.println("[Search took " + c + " ms]");

Multi-field query usage:

        long a = System.currentTimeMillis();
        try {
            int curpage = StringUtil.StringToInt(StringUtil.null2String(form.get("curpage")));
            int pagesize = StringUtil.StringToInt(StringUtil.null2String(form.get("pagesize")));
            String allkeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("allkeyword")));
            String onekeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("onekeyword")));
            String nokeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("nokeyword")));
            LuceneQuery lu = new LuceneQuery();
            form.addResult("list", lu.queryIndexFields(allkeyword,onekeyword,nokeyword, curpage, pagesize));
            form.addResult("curPage", lu.getCurrentPage());
            form.addResult("pageSize", lu.getPageSize());
            form.addResult("rowCount", lu.getRowCount());
            form.addResult("pageCount", lu.getPages());
        } catch (Exception e) {
            e.printStackTrace();
        }
        long b = System.currentTimeMillis();
        long c = b - a;
        System.out.println("[Advanced search took " + c + " ms]");

4. Lucene wildcard queries

            BooleanQuery bQuery = new BooleanQuery();  // compound query
            if (!"".equals(title)) {
                WildcardQuery w1 = new WildcardQuery(new Term("title", title+ "*"));

                bQuery.add(w1, BooleanClause.Occur.MUST);  //AND
            }
            int hm = start + pageSize;
            TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
            searcher.search(bQuery, res);

 

5. Lucene nested queries

Equivalent SQL: (unitid like 'unitid%' and idml like 'id2%') or (tounitid like 'unitid%' and tomlid like 'id2%' and tostate=1)

                    BooleanQuery bQuery = new BooleanQuery();
                    BooleanQuery b1 = new BooleanQuery();
                    WildcardQuery w1 = new WildcardQuery(new Term("unitid", unitid + "*"));
                    WildcardQuery w2 = new WildcardQuery(new Term("idml", id2 + "*"));
                    b1.add(w1, BooleanClause.Occur.MUST);//AND
                    b1.add(w2, BooleanClause.Occur.MUST);//AND
                    bQuery.add(b1, BooleanClause.Occur.SHOULD);//OR
                    BooleanQuery b2 = new BooleanQuery();
                    WildcardQuery w3 = new WildcardQuery(new Term("tounitid", unitid + "*"));
                    WildcardQuery w4 = new WildcardQuery(new Term("tomlid", id2 + "*"));
                    WildcardQuery w5 = new WildcardQuery(new Term("tostate", "1"));
                    b2.add(w3, BooleanClause.Occur.MUST);//AND
                    b2.add(w4, BooleanClause.Occur.MUST);//AND
                    b2.add(w5, BooleanClause.Occur.MUST);//AND
                    bQuery.add(b2, BooleanClause.Occur.SHOULD);//OR

6. Lucene: sort by date first, then paginate

The approach below is not ideal — it is better to sort while building the index so queries only have to paginate; if you need several sort orders, consider one index per order.

            int hm = start + pageSize;
            Sort sort = new Sort(new SortField("pdate", SortField.STRING, true));
            TopScoreDocCollector res = TopScoreDocCollector.create(pageSize, false);
            searcher.search(bQuery, res);
            this.rowCount = res.getTotalHits();
            this.pages = (rowCount - 1) / pageSize + 1; // total pages
            TopDocs tds = searcher.search(bQuery, rowCount, sort);// instead of res.topDocs(start, pageSize)
            ScoreDoc[] sd = tds.scoreDocs;
            System.out.println("rowCount:" + rowCount);
            int i = 0;
            for (ScoreDoc scoreDoc : sd) {
                i++;
                if (i <= start) { // skip the rows before the current page
                    continue;
                }
                if (i > hm) { // stop after the current page
                    break;
                }
                Document doc = searcher.doc(scoreDoc.doc);
                list.add(createObj(doc));
            }


The newer way to sort:

            int hm = start + pageSize;
            Sort sort = new Sort();
            SortField sortField = new SortField("pdate", SortField.STRING, true);
            sort.setSort(sortField);
            TopDocs hits = searcher.search(bQuery, null, hm, sort);
            this.rowCount = hits.totalHits;
            this.pages = (rowCount - 1) / pageSize + 1; // total pages
            for (int i = start; i < hits.scoreDocs.length; i++) {
                ScoreDoc sdoc = hits.scoreDocs[i];
                Document doc = searcher.doc(sdoc.doc);
                list.add(createObj(doc));
            }

PS:
I finished the scheduled index-building task on Monday; fuzzy search with Chinese segmentation, highlighting, and pagination on Tuesday; and wildcard queries, nested queries, and sort-then-paginate today. Going from zero to Lucene's main features took three days — performance, of course, still needs testing and tuning.

August 20, 2012

Oracle SYSDATE

 select to_char(sysdate,'YYYY/MM/DD') FROM DUAL;             -- 2007/09/20            
 select to_char(sysdate,'YYYY') FROM DUAL;                   -- 2007                  
 select to_char(sysdate,'YYY') FROM DUAL;                    -- 007                   
 select to_char(sysdate,'YY') FROM DUAL;                     -- 07                    
 select to_char(sysdate,'MM') FROM DUAL;                     -- 09                    
 select to_char(sysdate,'DD') FROM DUAL;                     -- 20                    
 select to_char(sysdate,'D') FROM DUAL;                      -- 5                     
 select to_char(sysdate,'DDD') FROM DUAL;                    -- 263                   
 select to_char(sysdate,'WW') FROM DUAL;                     -- 38                    
 select to_char(sysdate,'W') FROM DUAL;                      -- 3                     
 select to_char(sysdate,'YYYY/MM/DD HH24:MI:SS') FROM DUAL;  -- 2007/09/20 15:24:13   
 select to_char(sysdate,'YYYY/MM/DD HH:MI:SS') FROM DUAL;    -- 2007/09/20 03:25:23   
 select to_char(sysdate,'J') FROM DUAL;                      -- 2454364               
 select to_char(sysdate,'RR/MM/DD') FROM DUAL;               -- 07/09/20
July 31, 2012

JS: cascading parent/child checkboxes

function sel(obj) {
    var id = obj.value;
    var qx = document.getElementsByName("id");
    for (var i = 0; i < qx.length; i++) {
        if (qx[i].type == "checkbox") {
            var v = qx[i].value;
            // Descendant codes follow the parent's checked state.
            if (v != "" && v.length > id.length && v.startsWith(id)) {
                qx[i].checked = obj.checked;
            }
            // Unchecking a child also unchecks its ancestors.
            if (v != "" && v.length < id.length && id.startsWith(v)) {
                if (!obj.checked) {
                    qx[i].checked = false;
                }
            }
        }
    }
}
<input onclick="sel(this)" type="checkbox" name="id" value="0001" />0001

<input onclick="sel(this)" type="checkbox" name="id" value="00010001" />00010001

<input onclick="sel(this)" type="checkbox" name="id" value="00010002" />00010002

<input onclick="sel(this)" type="checkbox" name="id" value="00010003" />00010003
June 20, 2012

Android: scanning for AP (Wi-Fi access point) information

Add the permissions:

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />

WifiManager wifiManager = (WifiManager) getSystemService(WIFI_SERVICE);
WifiInfo wifiInfo = wifiManager.getConnectionInfo();
showMsg(wifiInfo.toString()); // your own display helper

wifiManager.startScan();
List<ScanResult> mWifiList = wifiManager.getScanResults();
for (int i = 0; i < mWifiList.size(); i++)
    logger.d(mWifiList.get(i).toString()); // your own logging helper
May 16, 2012

SQL: finding rows within 1 km of a point

Query for coordinates within a 1 km radius (spherical law of cosines: acos(-1) is π, and 6371.012 km is the Earth's mean radius):

select 6371.012 *
       acos(cos(acos(-1) / 180 * d.LATITUDE) *
            cos(acos(-1) / 180 * 31.885972440801) *
            cos(acos(-1) / 180 * d.LONGITUDE - acos(-1) / 180 * 117.30923429642) +
            sin(acos(-1) / 180 * d.LATITUDE) *
            sin(acos(-1) / 180 * 31.885972440801))*1 as a,
       id,name 
  from loc_data d
 where 6371.012 *
       acos(cos(acos(-1) / 180 * d.LATITUDE) *
            cos(acos(-1) / 180 * 31.885972440801) *
            cos(acos(-1) / 180 * d.LONGITUDE - acos(-1) / 180 * 117.30923429642) +
            sin(acos(-1) / 180 * d.LATITUDE) *
            sin(acos(-1) / 180 * 31.885972440801))*1 < 1 order by a asc
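The same great-circle distance in Java, for checking the SQL above (a sketch; the class and method names are mine):

```java
public class GeoDistance {
    static final double EARTH_RADIUS_KM = 6371.012;

    // Great-circle distance via the spherical law of cosines,
    // mirroring the SQL expression term by term.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double rad = Math.PI / 180.0; // degrees -> radians
        return EARTH_RADIUS_KM * Math.acos(
                Math.cos(lat1 * rad) * Math.cos(lat2 * rad)
                        * Math.cos(lon1 * rad - lon2 * rad)
                + Math.sin(lat1 * rad) * Math.sin(lat2 * rad));
    }

    public static void main(String[] args) {
        // One degree of latitude is roughly 111.2 km on this sphere.
        System.out.println(distanceKm(31.0, 117.3, 32.0, 117.3));
    }
}
```

Note this formula loses precision for points very close together; for sub-kilometer filters like the one above it is adequate, but the haversine formula is numerically safer.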
March 2, 2012

MyEclipse 9.1 registration code

MyEclipse 9.1 registration:

http://www.lephones.info/servlet/MyEclipseGenServlet

Subscriber:wizzer

Subscription Code:jLR8ZC-855550-60585657570866770

January 31, 2012

Approach: reverse-geocoding coordinates into addresses with the Baidu Maps JS API

Using the Baidu Maps JS API, build a self-refreshing page: each coordinate read from the database is passed to the API as a point; once the address comes back, a JS AJAX call posts it to another page that saves it to the database.

<script language="JavaScript">function myrefresh(){ window.location.reload(true); } setTimeout('myrefresh()', 1000);</script>

--------------------------------------------------

window.location.reload(true)

forces a reload from the server, bypassing the cache.

window.location.reload(false)

is the default: the browser checks whether the document changed on the server, and serves it from cache if it has not.

 

PS: Google's reverse-geocoding API is rate-limited, so Baidu Maps it is. I couldn't find a URL (REST) flavor of the Baidu API, hence this workaround. :)