Another Bug Crawls Out
A 90-year-old IT guy lies limp in bed. I say, "Get up and have a bite to eat..." He says, "I'm old, I have no appetite." I say, "There are lots of IT girls downstairs..." Even more feebly he replies, "My eyes have gone, I can't see them anyway." Just then the programmer from next door runs in: "Hey, that code you wrote sixty years ago — another bug just crawled out!" The IT guy: "Damn it, help me up, quick!"
The Beggar Programmer
I'm a programmer. One day I was sitting by the roadside, sipping water while struggling to track down a bug. A beggar sat down next to me and started begging; feeling sorry for him, I gave him one yuan and went back to debugging. Business must have been slow, because he idly watched what I was doing. After a while he said quietly: you're missing a semicolon there.
Ramblings on Buying a New Machine: ThinkPad X230
Before:
From owning my first computer as a student until now, I have bought one desktop, one laptop, and two ThinkPads.
The first was a desktop I had assembled in 2004: a single-core 1.7 GHz Celeron, an 80 GB hard drive, 256 MB of RAM, 128 MB of video memory, and a 17-inch LCD — well, that thick, clunky monitor. Buying a computer still wasn't a common thing back then; ours was one of only a few households in the village with one. That summer vacation I even hauled the tower and monitor home on a long-distance bus to play with, and dial-up internet was expensive. Looking back now, it's hard to imagine where that much enthusiasm came from.
I remember running Windows 98 on it and later switching to XP. The games I played most were probably CS, BnB (泡泡堂), and MapleStory; I don't remember much beyond that. After graduation the machine went back to my hometown and was sold off cheap.
The second was a Lenovo laptop, the Xuri 410L — apparently the first model styled after the ThinkPad: a second-generation 1.6 GHz Celeron, RAM bumped to 512 MB, an 80 GB hard drive, running XP. In 2006 I had just left my internship and joined a software company; since my job wasn't stable yet, the company fronted the money for the laptop and deducted it from my salary each month. Having a laptop made life much easier — it went with me to work, back home, everywhere — a habit that has stuck to this day.
The third was a ThinkPad R400: a dual-core P8700 CPU, 2 GB of RAM, a 250 GB hard drive. In 2010 the company chipped in 5K to keep me around and I added 2K of my own to buy it. The 410L went to my third sister's kid; I doubt it even boots anymore. Over time I upgraded the R400 to 8 GB of RAM and a 500 GB drive plus a 250 GB drive in the optical bay — the CPU couldn't be upgraded, and with technology moving so fast its performance fell behind. After two-plus years a broken USB port had been repaired and the keyboard and mouse replaced; a while ago I handed it to a friend to sell, no idea what it will fetch.
Now:
Compared with the precious MacBook — which feels too flashy for software development — I still prefer the ThinkPad's understated, businesslike feel. Quad-core, eight-thread i7s are out now and solid-state drives are all the rage, but those machines easily run over ten thousand yuan. My salary rises year by year, yet after buying an apartment money is tight, and I honestly can't afford an officially imported unit.
I kept browsing forums and Taobao, and settled on the X230 for its light weight, the new chiclet keyboard, the keyboard backlight, the fingerprint reader, and the IPS screen. After the National Day holiday the price dropped about ¥500 overall. Thinking of lugging the heavy R400 to and from work, I gritted my teeth and bought a grey-market unit on Taobao in installments — be sure to pick a seller with a good reputation; I won't post the link, to avoid advertising. I also added a 128 GB SSD; luckily the machine has a spare mSATA slot, so the original drive didn't have to come out.
Here is the detailed configuration:
Model: Lenovo ThinkPad X230 notebook
OS: Windows 7 Ultimate 64-bit SP1 (DirectX 11)
CPU: Intel 3rd-gen Core i7-3520M @ 2.90GHz, dual-core / four threads
Motherboard: Lenovo 2324B76 (Intel Ivy Bridge)
Memory: 8 GB (Samsung DDR3 1600MHz)
Primary drive: Crucial M4-CT128M4SSD3 (128 GB SSD)
Secondary drive: Hitachi HTS725050A7E630 (500 GB / 7200 rpm)
Graphics: Intel Ivy Bridge Graphics Controller (2112 MB / Lenovo)
Display: Lenovo LEN40E2 (12.7-inch)
Audio: Realtek ALC269 @ Intel Panther Point High Definition Audio Controller
NIC: Intel 82579LM Gigabit Network Connection / Lenovo
I should be satisfied with this, but it is hardly the best or highest configuration, and no doubt it will be outdated before long. Reading this post again in two years, or several, will probably bring its own round of sighs. Time to go play some League of Legends~
Apache + Tomcat Load Balancing: A Single-Machine Deployment Example
I. Preparation
Tomcat 6: http://tomcat.apache.org/download-60.cgi
Download: apache-tomcat-6.0.36.exe
Apache HTTP Server 2.2: http://www.fayea.com/apache-mirror//httpd/binaries/win32/
Download: httpd-2.2.22-win32-x86-no_ssl.msi
Apache Tomcat Connector: http://archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/win32/jk-1.2.31/
Download: mod_jk-1.2.31-httpd-2.2.3.so
II. Installation and Configuration
Installation paths:
E:\Apache2.2
E:\apache-tomcat-6.0.36-1
E:\apache-tomcat-6.0.36-2
Project path:
E:\work\demo
1. Apache configuration
Edit the file E:\Apache2.2\conf\httpd.conf.
1) Include the external configuration file:
Add the following line at the end of the file:
include conf/mod_jk.conf
2) Configure the project path:
<IfModule alias_module>
    Alias /demo "E:/work/demo"
    ScriptAlias /cgi-bin/ "E:/Apache2.2/cgi-bin/"
</IfModule>
3) Configure directory permissions:
<Directory "E:/work/demo">
    Order Deny,Allow
    Allow from all
</Directory>
4) Configure the default index page:
Add index.jsp:
<IfModule dir_module>
    DirectoryIndex index.jsp index.html
</IfModule>
5) Create the file E:\Apache2.2\conf\mod_jk.conf with the following content:
LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so
JkWorkersFile conf/workers.properties
# Which requests are handed to Tomcat; "controller" is the load-balancer worker name defined in workers.properties
JkMount /*.jsp controller
Also copy mod_jk-1.2.31-httpd-2.2.3.so into the E:\Apache2.2\modules folder.
6) Create the file E:\Apache2.2\conf\workers.properties with the following content:
# server
worker.list=controller
#======== tomcat1 ========
worker.tomcat1.port=10009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor=1
#======== tomcat2 ========
worker.tomcat2.port=11009
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=1
#======== controller, the load-balancer worker ========
worker.controller.type=lb
worker.controller.balanced_workers=tomcat1,tomcat2
worker.controller.sticky_session=false
worker.controller.sticky_session_force=1
#worker.controller.sticky_session=1
2. Tomcat configuration
1) Tomcat-1 configuration
E:\apache-tomcat-6.0.36-1\conf\server.xml
Ports that need to be changed:
<Server port="10005" shutdown="SHUTDOWN">
<Connector port="10080" URIEncoding="GBK" protocol="HTTP/1.1" connectionTimeout="20000" keepAliveTimeout="15000" maxKeepAliveRequests="1" redirectPort="8443" />
<Connector port="10009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1"> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
2) Tomcat-2 configuration
E:\apache-tomcat-6.0.36-2\conf\server.xml
Ports that need to be changed:
<Server port="11005" shutdown="SHUTDOWN">
<Connector port="11080" URIEncoding="GBK" protocol="HTTP/1.1" connectionTimeout="20000" keepAliveTimeout="15000" maxKeepAliveRequests="1" redirectPort="8443" />
<Connector port="11009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2"> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
III. Testing the Project
In the project's web.xml, add <distributable/> under the <web-app> element.
test.jsp
<%@ page contentType="text/html; charset=GBK"%>
<%@ page import="java.util.*"%>
<html>
<head>
<title>Cluster App Test</title>
</head>
<body>
Server Info:
<%
    out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
%>
<%
    out.println("<br> ID " + session.getId() + "<br>");
    // If a new session attribute was submitted, store it in the session
    String dataName = request.getParameter("dataName");
    if (dataName != null && dataName.length() > 0) {
        String dataValue = request.getParameter("dataValue");
        session.setAttribute(dataName, dataValue);
    }
    out.println("<b>Session list</b><br>");
    System.out.println("============================");
    Enumeration e = session.getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        String value = session.getAttribute(name).toString();
        out.println(name + " = " + value + "<br>");
        System.out.println(name + " = " + value);
    }
%>
<form action="test.jsp" method="POST">
    Name: <input type=text size=20 name="dataName"> <br>
    Value: <input type=text size=20 name="dataValue"> <br>
    <input type=submit>
</form>
</body>
</html>
Start the Apache 2 service first, then start the two Tomcat instances in turn.
Then visit each of the following:
http://127.0.0.1:10080/test.jsp
http://127.0.0.1:11080/test.jsp
http://127.0.0.1/test.jsp
The test itself is what you'd expect: submit a few attributes, and if all three URLs show the same session contents, the configuration is working.
IV. Notes
1. If the test does not succeed, check the logs to see what error is reported, review whether any configuration step was missed and whether Apache's directory permissions are set, and make sure the versions match;
2. Objects placed in the session must be serializable, i.e. their classes must implement Serializable.
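As a minimal sketch (the class name and field below are hypothetical, not part of the original project), a session attribute that can be replicated between the two Tomcat instances might look like this:

import java.io.Serializable;

// Hypothetical example: anything stored via session.setAttribute(...) must be
// Serializable so the Tomcat cluster can copy it between nodes.
public class UserProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    private String userName;

    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
}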
Eat! Eat It Right Now!
I went to my sister's place to mooch a meal and they steamed crabs. My brother-in-law put one on my plate and one on his four-year-old daughter's. "Daddy, you eat it." "Daddy's not having any, it's for auntie and for you." The little girl said: "Daddy, you can't do this. You have to be kinder to yourself. You work like an ox every day and don't even eat — if you work yourself to death, some other uncle will spend your money, live in your house, sleep with your wife, and hit your kid! Eat! Eat it right now!"
I Can Code
On the street, a couple is arguing. The girl says, "Let's break up!" The boy is silent for a long while, then asks, "Can I say one last thing?" "Say it, stop dithering." "I can code (biān chéng)..." "What good is coding? Coders are everywhere these days!" The boy's face turns red and he continues, "I can code... I mean, I can become (biàn chéng)... the angel you love from the fairy tale..." (In Chinese, "write code" and "become" are near-homophones.)
Lucene 3.6.1: Chinese Word Segmentation, Index Creation, Sorting, Multi-Field Paged Queries, and Highlighting (Source Code)
1. Preparation
Download Lucene 3.6.1: http://lucene.apache.org/
Download the IK Analyzer Chinese word segmenter: http://code.google.com/p/ik-analyzer/downloads/list (note: download IK Analyzer 2012_u5_source.zip; the other versions have bugs)
Download Solr 3.6.1: http://lucene.apache.org/solr/ (its packages are needed when compiling IK Analyzer)
OK. Copy the Lucene and Solr jars (lucene-core-3.6.1.jar, lucene-highlighter-3.6.1.jar, lucene-analyzers-3.6.1.jar, apache-solr-core-3.6.1.jar, apache-solr-solrj-3.6.1.jar) into the project's lib directory, and put the IK source under the project's src directory.
2. Building the index from Oracle data (using IK segmentation)
package lucene.util;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.util.Version;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.wltea.analyzer.lucene.IKAnalyzer;

import java.sql.Connection;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;

import modules.gk.Gk_info;
import modules.gk.Gk_infoSub;
import web.sys.Globals;
import web.db.DBConnector;
import web.db.ObjectCtl;
import web.util.StringUtil;

//Wizzer.cn
public class LuceneIndex {
    IndexWriter writer = null;
    FSDirectory dir = null;
    boolean create = true; // whether to initialize (overwrite) the index

    public void init() {
        long a1 = System.currentTimeMillis();
        System.out.println("[Lucene started: " + new Date() + "]");
        Connection con = DBConnector.getconecttion(); // obtain a database connection
        try {
            final File docDir = new File(Globals.SYS_COM_CONFIG.get("sys.index.path").toString()); // e.g. E:\lucene
            if (!docDir.exists()) {
                docDir.mkdirs();
            }
            String cr = Globals.SYS_COM_CONFIG.get("sys.index.create").toString(); // "true" or "false"
            if ("false".equals(cr.toLowerCase())) {
                create = false;
            }
            dir = FSDirectory.open(docDir);
            // Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
            Analyzer analyzer = new IKAnalyzer(true);
            IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_36, analyzer);
            if (create) {
                // Create a new index in the directory, removing any
                // previously indexed documents:
                iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE);
            } else {
                // Add new documents to an existing index:
                iwc.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
            }
            writer = new IndexWriter(dir, iwc);
            String sql = "SELECT indexno,title,describes,pdate,keywords FROM TABLEA WHERE STATE=1 AND SSTAG<>1 ";
            int rowCount = ObjectCtl.getRowCount(con, sql);
            int pageSize = StringUtil.StringToInt(Globals.SYS_COM_CONFIG.get("sys.index.size").toString()); // records per batch
            int pages = (rowCount - 1) / pageSize + 1; // total number of batches
            ArrayList list = null;
            Gk_infoSub gk = null;
            for (int i = 1; i < pages + 1; i++) {
                long a = System.currentTimeMillis();
                list = ObjectCtl.listPage(con, sql, i, pageSize, new Gk_infoSub());
                for (int j = 0; j < list.size(); j++) {
                    gk = (Gk_infoSub) list.get(j);
                    Document doc = new Document();
                    doc.add(new Field("indexno", StringUtil.null2String(gk.getIndexno()), Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS)); // primary key, not analyzed
                    doc.add(new Field("title", StringUtil.null2String(gk.getTitle()), Field.Store.YES, Field.Index.ANALYZED));
                    doc.add(new Field("describes", StringUtil.null2String(gk.getDescribes()), Field.Store.YES, Field.Index.ANALYZED));
                    doc.add(new Field("pdate", StringUtil.null2String(gk.getPdate()), Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS)); // date, not analyzed
                    doc.add(new Field("keywords", StringUtil.null2String(gk.getKeywords()), Field.Store.YES, Field.Index.ANALYZED));
                    writer.addDocument(doc);
                    ObjectCtl.executeUpdateBySql(con, "UPDATE TABLEA SET SSTAG=1 WHERE indexno='" + gk.getIndexno() + "'"); // mark the row as indexed
                }
                long b = System.currentTimeMillis();
                long c = b - a;
                System.out.println("[Lucene " + rowCount + " rows, " + pages + " batches, batch " + i + " took " + c + " ms]");
            }
            writer.commit();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            DBConnector.freecon(con); // release the database connection
            try {
                if (writer != null) {
                    writer.close();
                }
            } catch (CorruptIndexException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    if (dir != null && IndexWriter.isLocked(dir)) {
                        IndexWriter.unlock(dir); // make sure the index lock is released
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        long b1 = System.currentTimeMillis();
        long c1 = b1 - a1;
        System.out.println("[Lucene finished, took " + c1 + " ms, completed at " + new Date() + "]");
    }
}
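A minimal usage sketch, assuming the indexer is driven by a plain java.util.Timer job (the wrapper class and the 24-hour schedule below are hypothetical; the post only mentions that a scheduled task rebuilds the index):

import java.util.Timer;
import java.util.TimerTask;

// Hypothetical scheduler: run the indexer immediately, then once every 24 hours.
public class LuceneIndexJob {
    public static void main(String[] args) {
        Timer timer = new Timer("lucene-index"); // non-daemon thread keeps the JVM alive
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                new LuceneIndex().init(); // the indexer class shown above
            }
        }, 0L, 24L * 60 * 60 * 1000);
    }
}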
3. Single-field queries and multi-field paged queries with highlighting
package lucene.util;

import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.search.*;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.util.Version;

import modules.gk.Gk_infoSub;

import java.util.ArrayList;
import java.io.File;
import java.io.StringReader;
import java.lang.reflect.Constructor;

import web.util.StringUtil;
import web.sys.Globals;
import org.wltea.analyzer.lucene.IKAnalyzer;

//Wizzer.cn
public class LuceneQuery {
    private static String indexPath; // directory where the index files live
    private int rowCount;            // total number of hits
    private int pages;               // total number of pages
    private int currentPage;         // current page
    private int pageSize;            // records per page

    public LuceneQuery() {
        this.indexPath = Globals.SYS_COM_CONFIG.get("sys.index.path").toString();
    }

    public int getRowCount() {
        return rowCount;
    }

    public int getPages() {
        return pages;
    }

    public int getPageSize() {
        return pageSize;
    }

    public int getCurrentPage() {
        return currentPage;
    }

    /**
     * Query the index by a single field (title).
     */
    public ArrayList queryIndexTitle(String keyWord, int curpage, int pageSize) {
        ArrayList list = new ArrayList();
        try {
            if (curpage <= 0) {
                curpage = 1;
            }
            if (pageSize <= 0) {
                pageSize = 20;
            }
            this.pageSize = pageSize;   // records per page
            this.currentPage = curpage; // current page
            int start = (curpage - 1) * pageSize;
            Directory dir = FSDirectory.open(new File(indexPath));
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            Analyzer analyzer = new IKAnalyzer(true);
            QueryParser queryParser = new QueryParser(Version.LUCENE_36, "title", analyzer);
            queryParser.setDefaultOperator(QueryParser.AND_OPERATOR);
            Query query = queryParser.parse(keyWord);
            int hm = start + pageSize;
            TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
            searcher.search(query, res);
            SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<span style='color:red'>", "</span>");
            Highlighter highlighter = new Highlighter(simpleHTMLFormatter, new QueryScorer(query));
            this.rowCount = res.getTotalHits();
            this.pages = (rowCount - 1) / pageSize + 1; // compute the total number of pages
            TopDocs tds = res.topDocs(start, pageSize);
            ScoreDoc[] sd = tds.scoreDocs;
            for (int i = 0; i < sd.length; i++) {
                Document hitDoc = reader.document(sd[i].doc);
                list.add(createObj(hitDoc, analyzer, highlighter));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return list;
    }

    /**
     * Query the index across multiple fields (all / any / excluded keywords).
     */
    public ArrayList queryIndexFields(String allkeyword, String onekeyword, String nokeyword, int curpage, int pageSize) {
        ArrayList list = new ArrayList();
        try {
            if (curpage <= 0) {
                curpage = 1;
            }
            if (pageSize <= 0) {
                pageSize = 20;
            }
            this.pageSize = pageSize;   // records per page
            this.currentPage = curpage; // current page
            int start = (curpage - 1) * pageSize;
            Directory dir = FSDirectory.open(new File(indexPath));
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            BooleanQuery bQuery = new BooleanQuery(); // combined query
            if (!"".equals(allkeyword)) { // must contain all keywords
                KeywordAnalyzer analyzer = new KeywordAnalyzer();
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD};
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, allkeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST); // AND
            }
            if (!"".equals(onekeyword)) { // may contain any of the keywords
                Analyzer analyzer = new IKAnalyzer(true);
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD}; // OR across fields
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, onekeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST); // AND
            }
            if (!"".equals(nokeyword)) { // must not contain these keywords
                Analyzer analyzer = new IKAnalyzer(true);
                BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD};
                Query query = MultiFieldQueryParser.parse(Version.LUCENE_36, nokeyword, new String[]{"title", "describes", "keywords"}, flags, analyzer);
                bQuery.add(query, BooleanClause.Occur.MUST_NOT); // NOT
            }
            int hm = start + pageSize;
            TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
            searcher.search(bQuery, res);
            SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<span style='color:red'>", "</span>");
            Highlighter highlighter = new Highlighter(simpleHTMLFormatter, new QueryScorer(bQuery));
            this.rowCount = res.getTotalHits();
            this.pages = (rowCount - 1) / pageSize + 1; // compute the total number of pages
            System.out.println("rowCount:" + rowCount);
            TopDocs tds = res.topDocs(start, pageSize);
            ScoreDoc[] sd = tds.scoreDocs;
            Analyzer analyzer = new IKAnalyzer();
            for (int i = 0; i < sd.length; i++) {
                Document hitDoc = reader.document(sd[i].doc);
                list.add(createObj(hitDoc, analyzer, highlighter));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return list;
    }

    /**
     * Build the result object with highlighted fields.
     */
    private synchronized static Object createObj(Document doc, Analyzer analyzer, Highlighter highlighter) {
        Gk_infoSub gk = new Gk_infoSub();
        try {
            if (doc != null) {
                gk.setIndexno(StringUtil.null2String(doc.get("indexno")));
                gk.setPdate(StringUtil.null2String(doc.get("pdate")));
                String title = StringUtil.null2String(doc.get("title"));
                gk.setTitle(title);
                if (!"".equals(title)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(title.length()));
                    TokenStream tk = analyzer.tokenStream("title", new StringReader(title));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, title));
                    if (!"".equals(htext)) {
                        gk.setTitle(htext);
                    }
                }
                String keywords = StringUtil.null2String(doc.get("keywords"));
                gk.setKeywords(keywords);
                if (!"".equals(keywords)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(keywords.length()));
                    TokenStream tk = analyzer.tokenStream("keywords", new StringReader(keywords));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, keywords));
                    if (!"".equals(htext)) {
                        gk.setKeywords(htext);
                    }
                }
                String describes = StringUtil.null2String(doc.get("describes"));
                gk.setDescribes(describes);
                if (!"".equals(describes)) {
                    highlighter.setTextFragmenter(new SimpleFragmenter(describes.length()));
                    TokenStream tk = analyzer.tokenStream("keywords", new StringReader(describes));
                    String htext = StringUtil.null2String(highlighter.getBestFragment(tk, describes));
                    if (!"".equals(htext)) {
                        gk.setDescribes(htext);
                    }
                }
            }
            return gk;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            gk = null;
        }
    }

    /**
     * Build the result object without highlighting.
     */
    private synchronized static Object createObj(Document doc) {
        Gk_infoSub gk = new Gk_infoSub();
        try {
            if (doc != null) {
                gk.setIndexno(StringUtil.null2String(doc.get("indexno")));
                gk.setPdate(StringUtil.null2String(doc.get("pdate")));
                gk.setTitle(StringUtil.null2String(doc.get("title")));
                gk.setKeywords(StringUtil.null2String(doc.get("keywords")));
                gk.setDescribes(StringUtil.null2String(doc.get("describes")));
            }
            return gk;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            gk = null;
        }
    }
}
Single-field query usage:
long a = System.currentTimeMillis();
try {
    int curpage = StringUtil.StringToInt(StringUtil.null2String(form.get("curpage")));
    int pagesize = StringUtil.StringToInt(StringUtil.null2String(form.get("pagesize")));
    String title = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("title")));
    LuceneQuery lu = new LuceneQuery();
    form.addResult("list", lu.queryIndexTitle(title, curpage, pagesize));
    form.addResult("curPage", lu.getCurrentPage());
    form.addResult("pageSize", lu.getPageSize());
    form.addResult("rowCount", lu.getRowCount());
    form.addResult("pageCount", lu.getPages());
} catch (Exception e) {
    e.printStackTrace();
}
long b = System.currentTimeMillis();
long c = b - a;
System.out.println("[Search took " + c + " ms]");
Multi-field query usage:
long a = System.currentTimeMillis();
try {
    int curpage = StringUtil.StringToInt(StringUtil.null2String(form.get("curpage")));
    int pagesize = StringUtil.StringToInt(StringUtil.null2String(form.get("pagesize")));
    String allkeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("allkeyword")));
    String onekeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("onekeyword")));
    String nokeyword = StringUtil.replaceLuceneStr(StringUtil.null2String(form.get("nokeyword")));
    LuceneQuery lu = new LuceneQuery();
    form.addResult("list", lu.queryIndexFields(allkeyword, onekeyword, nokeyword, curpage, pagesize));
    form.addResult("curPage", lu.getCurrentPage());
    form.addResult("pageSize", lu.getPageSize());
    form.addResult("rowCount", lu.getRowCount());
    form.addResult("pageCount", lu.getPages());
} catch (Exception e) {
    e.printStackTrace();
}
long b = System.currentTimeMillis();
long c = b - a;
System.out.println("[Advanced search took " + c + " ms]");
4. Lucene wildcard queries
BooleanQuery bQuery = new BooleanQuery(); // combined query
if (!"".equals(title)) {
    WildcardQuery w1 = new WildcardQuery(new Term("title", title + "*"));
    bQuery.add(w1, BooleanClause.Occur.MUST); // AND
}
int hm = start + pageSize;
TopScoreDocCollector res = TopScoreDocCollector.create(hm, false);
searcher.search(bQuery, res);
5. Lucene nested (compound) queries
Equivalent SQL: (unitid like 'unitid%' and idml like 'id2%') or (tounitid like 'unitid%' and tomlid like 'id2%' and tostate=1)
BooleanQuery bQuery = new BooleanQuery();
BooleanQuery b1 = new BooleanQuery();
WildcardQuery w1 = new WildcardQuery(new Term("unitid", unitid + "*"));
WildcardQuery w2 = new WildcardQuery(new Term("idml", id2 + "*"));
b1.add(w1, BooleanClause.Occur.MUST); // AND
b1.add(w2, BooleanClause.Occur.MUST); // AND
bQuery.add(b1, BooleanClause.Occur.SHOULD); // OR
BooleanQuery b2 = new BooleanQuery();
WildcardQuery w3 = new WildcardQuery(new Term("tounitid", unitid + "*"));
WildcardQuery w4 = new WildcardQuery(new Term("tomlid", id2 + "*"));
WildcardQuery w5 = new WildcardQuery(new Term("tostate", "1"));
b2.add(w3, BooleanClause.Occur.MUST); // AND
b2.add(w4, BooleanClause.Occur.MUST); // AND
b2.add(w5, BooleanClause.Occur.MUST); // AND
bQuery.add(b2, BooleanClause.Occur.SHOULD); // OR
6. Lucene: sort by date first, then paginate
The approach below is not ideal. It is better to handle the ordering when the index is built, so that a query only has to paginate; if several sort orders are needed, a separate index can be built for each.
int hm = start + pageSize;
Sort sort = new Sort(new SortField("pdate", SortField.STRING, true));
TopScoreDocCollector res = TopScoreDocCollector.create(pageSize, false);
searcher.search(bQuery, res);
this.rowCount = res.getTotalHits();
this.pages = (rowCount - 1) / pageSize + 1; // compute the total number of pages
TopDocs tds = searcher.search(bQuery, rowCount, sort); // res.topDocs(start, pageSize);
ScoreDoc[] sd = tds.scoreDocs;
System.out.println("rowCount:" + rowCount);
int i = 0;
for (ScoreDoc scoreDoc : sd) {
    i++;
    if (i < start) {
        continue;
    }
    if (i > hm) {
        break;
    }
    Document doc = searcher.doc(scoreDoc.doc);
    list.add(createObj(doc));
}
A more recent way to write the sorted query:
int hm = start + pageSize;
Sort sort = new Sort();
SortField sortField = new SortField("pdate", SortField.STRING, true);
sort.setSort(sortField);
TopDocs hits = searcher.search(bQuery, null, hm, sort);
this.rowCount = hits.totalHits;
this.pages = (rowCount - 1) / pageSize + 1; // compute the total number of pages
for (int i = start; i < hits.scoreDocs.length; i++) {
    ScoreDoc sdoc = hits.scoreDocs[i];
    Document doc = searcher.doc(sdoc.doc);
    list.add(createObj(doc));
}
ps:
On Monday I finished the scheduled task that builds the index, on Tuesday I implemented fuzzy queries with Chinese segmentation, highlighting, and pagination, and today I implemented wildcard queries, nested queries, and sort-then-paginate. Going from zero exposure to the main Lucene features took three days; how well it performs remains to be tested and optimized.
Oracle SYSDATE
select to_char(sysdate,'YYYY/MM/DD') FROM DUAL;            -- 2007/09/20
select to_char(sysdate,'YYYY') FROM DUAL;                  -- 2007
select to_char(sysdate,'YYY') FROM DUAL;                   -- 007
select to_char(sysdate,'YY') FROM DUAL;                    -- 07
select to_char(sysdate,'MM') FROM DUAL;                    -- 09
select to_char(sysdate,'DD') FROM DUAL;                    -- 20
select to_char(sysdate,'D') FROM DUAL;                     -- 5
select to_char(sysdate,'DDD') FROM DUAL;                   -- 263
select to_char(sysdate,'WW') FROM DUAL;                    -- 38
select to_char(sysdate,'W') FROM DUAL;                     -- 3
select to_char(sysdate,'YYYY/MM/DD HH24:MI:SS') FROM DUAL; -- 2007/09/20 15:24:13
select to_char(sysdate,'YYYY/MM/DD HH:MI:SS') FROM DUAL;   -- 2007/09/20 03:25:23
select to_char(sysdate,'J') FROM DUAL;                     -- 2454364
select to_char(sysdate,'RR/MM/DD') FROM DUAL;              -- 07/09/20
JS: Cascading Selection Between Parent and Child Checkboxes
function sel(obj) {
    var id = obj.value;
    var qx = document.getElementsByName("id");
    for (var i = 0; i < qx.length; i++) {
        if (qx[i].type == "checkbox") {
            var v = qx[i].value;
            // Child node: its value is longer than, and prefixed by, the clicked value
            if (v != "" && v.length > id.length && v.indexOf(id) == 0) {
                qx[i].checked = obj.checked;
            }
            // Parent node: the clicked value is prefixed by its value
            if (v != "" && v.length < id.length && id.indexOf(v) == 0) {
                if (!obj.checked) {
                    qx[i].checked = false;
                }
            }
        }
    }
}
<input onclick="sel(this)" type="checkbox" name="id" value="0001" />0001
<input onclick="sel(this)" type="checkbox" name="id" value="00010001" />00010001
<input onclick="sel(this)" type="checkbox" name="id" value="00010002" />00010002
<input onclick="sel(this)" type="checkbox" name="id" value="00010003" />00010003
Empress Dowager Cixi
Using the pretext of founding a naval academy, Empress Dowager Cixi diverted navy funds to rebuild the Summer Palace. A minister asked: if we build a navy on Kunming Lake, how will the warships ever get out? Cixi: I had an immortal divine it — the day the navy is ready, a once-in-a-century downpour will fall, the capital will turn into a lake country, and the fleet can sail straight out of Kunming Lake to take Japan. The minister worried the rain might not come. Cixi said: the immortal promised that as long as this is a once-in-five-thousand-years dynasty, the rain is guaranteed. That year, the rain never came~
No One Will Hijack Planes Anymore
I figure no one will hijack planes anymore. The moment the hijacker stands up and shouts "Hijack!", the passengers around him grin from ear to ear. Above his head glows a line of golden characters: "Dear, a 3-million-yuan apartment! 1 million in cash! Plus a dedicated BMW or Audi, and free flights for life!" The poor hijacker would practically be hugged to death; the passengers on every flight would wait for a hijacker the way you wait for your first love.
Nokia on Its Last Breath
Nokia, beaten flat to the ground by Android and Apple, called Microsoft for help. Half a day later Microsoft finally showed up in a car — and drove straight over Nokia. Nokia gasped, "I'm not dead yet." Microsoft replied, "Oh, oh, let me back up..."
Android: Scanning for AP (Access Point) Information
Add the permissions:
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
WifiManager wifiManager = (WifiManager) getSystemService(WIFI_SERVICE);
WifiInfo wifiInfo = wifiManager.getConnectionInfo();
showMsg(wifiInfo.toString()); // your own display helper
wifiManager.startScan();
List<ScanResult> mWifiList = wifiManager.getScanResults();
for (int i = 0; i < mWifiList.size(); i++) {
    logger.d(mWifiList.get(i).toString()); // your own logging helper
}
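Note that startScan() is asynchronous, so the list returned immediately afterwards may still be the previous scan. A minimal sketch (assuming the code runs inside an Activity; the receiver and log tag are my own additions, not part of the original snippet) is to read the results when the system broadcasts that the scan has finished:

BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // Called once the scan results are available
        WifiManager wm = (WifiManager) getSystemService(WIFI_SERVICE);
        for (ScanResult r : wm.getScanResults()) {
            Log.d("WifiScan", r.SSID + " / " + r.BSSID + " / " + r.level); // name, MAC, signal level
        }
    }
};
registerReceiver(receiver, new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
wifiManager.startScan();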
The Sauce Vat
A fish fell into a vat of soybean paste and went around complaining about the environment. A big maggot crawled over and jabbed a finger at the fish: "Shut it! You can't judge a sauce vat by fish-tank standards. By sauce-vat standards things are already excellent, fully in line with the vat's level of development and technical conditions. Besides, our principle is pollute first, clean up later — do you think brewing this whole vat of paste was easy?" By the time most of the fish had resigned themselves to their fate, the maggots had quietly crawled out of the vat, turned into flies, and buzzed away.
Eight Unsolved Mysteries of the Universe
The eight unsolved mysteries of the universe listed by the journal Science are:
1. Dark energy, which makes up 73% of the existing universe yet has never been observed or measured. Its existence is "required on demand": it balances the mathematics describing the universe, but it may never be observed;
2. Dark matter, closely tied to dark energy and described as the "glue" that holds everything in the universe together. Adrian Cho, who wrote the related piece for Science, believes that unlike dark energy, scientists may well observe this substance directly some day;
3. Where did the baryons go? Baryons are the particles that make up ordinary matter, yet for some reason, when researchers add up dark energy and dark matter and attribute the remainder to baryons, the total does not come to 100%;
4. Why do stars explode? We have a basic picture of how stars and planetary systems form, but scientists admit they still do not fully understand what happens inside a star when it explodes; we only know that a supernova is left behind;
5. What reionized the universe? Several hundred thousand years after the Big Bang, electrons were stripped from atoms, and it is still not known why;
6. What is the source of the various high-energy cosmic rays? Although Earth's atmosphere shields us from most cosmic rays, we are still "bombarded" by them every day, and scientists have yet to reach a consensus on where they come from;
7. Why is our solar system so unusual? Did it form step by step according to some logic, or by sheer accident? No one really knows.
8. Why is the corona so hot? Solar scientists still cannot figure it out. The corona is the Sun's outermost layer, yet its temperature is astonishingly high. This strange "layering" of our nearest star remains a mystery.
The Landlord
Since time immemorial there has been a landlord named Monday, cruel and twisted by nature! He abused his own brothers Saturday and Sunday, and even married his little sister Friday off to a vicious landlord from the next village named Overtime! On top of that, he led Tuesday, Wednesday, and Thursday in running roughshod over the countryside and preying on the common people. His crimes are endless and outrageous, and they have plunged the masses into an abyss of suffering!!
The Millionaire
A host interviews an 18-year-old millionaire and asks how he did it. "Honestly, I never had any formal education; I just stayed at home." Host: "Then it must have been your parents who raised you into such an outstanding talent." "Not really, they never taught me anything either. On my eighteenth birthday they called me over and handed me a bankbook: son, this is the money you saved by not going to school all these years."
SQL: Querying Latitude/Longitude Data Within a 1 km Radius
Query the coordinate records within 1 km of a given point:
select 6371.012 * acos(cos(acos(-1) / 180 * d.LATITUDE) * cos(acos(-1) / 180 * 31.885972440801)
         * cos(acos(-1) / 180 * d.LONGITUDE - acos(-1) / 180 * 117.30923429642)
       + sin(acos(-1) / 180 * d.LATITUDE) * sin(acos(-1) / 180 * 31.885972440801)) * 1 as a,
       id, name
  from loc_data d
 where 6371.012 * acos(cos(acos(-1) / 180 * d.LATITUDE) * cos(acos(-1) / 180 * 31.885972440801)
         * cos(acos(-1) / 180 * d.LONGITUDE - acos(-1) / 180 * 117.30923429642)
       + sin(acos(-1) / 180 * d.LATITUDE) * sin(acos(-1) / 180 * 31.885972440801)) * 1 < 1
 order by a asc
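For reference, here is a minimal Java sketch of the same spherical law-of-cosines formula used in the SQL above (acos(-1) is π and 6371.012 is the mean Earth radius in kilometres; the class and method names are my own, and the sample coordinates are just the centre point from the query):

public class GeoDistance {
    private static final double EARTH_RADIUS_KM = 6371.012; // same constant as in the SQL

    // Great-circle distance between two latitude/longitude points, in kilometres.
    public static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double rad = Math.PI / 180.0; // acos(-1) / 180 in the SQL
        return EARTH_RADIUS_KM * Math.acos(
                Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.cos((lon1 - lon2) * rad)
              + Math.sin(lat1 * rad) * Math.sin(lat2 * rad));
    }

    public static void main(String[] args) {
        // A row would match the WHERE clause above when this value is less than 1.
        System.out.println(distanceKm(31.885972440801, 117.30923429642, 31.89, 117.31));
    }
}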