Can you modify the value of a final field at runtime?
(adapterofcoms, 2010-03-18, http://www.aygfsteel.com/adapterofcoms/articles/315748.html)

The examples are [final int x=911] and [static final int y=912] on jdk1.6.0_16 (the version is spelled out in this much detail because a jdk bug turns up further down).

The sample class:

class Test {
    private final int x = 911;        // modifiers: final -> 18, non-final -> 2
    static final private int y = 912; // modifiers: final -> 26, non-final -> 10
    public int getX() {
        return x;
    }
    public static int getY() {
        return y;
    }
}

A final field in Java denotes a constant: once assigned, its value cannot change. The compiler applies the following optimization to final fields.

e.g:

Test t=new Test();

Every reference to t.x in the program is replaced by the compiler with the literal 911, and the return x in getX() is likewise compiled as return 911;

So changing the value of x at runtime gets you nowhere; the compiler has resolved these references statically.

The exception is Test.class.getDeclaredField("x").getInt(t);

 

So how do you change the value of the final field x at runtime?

For private final int x=911 the Field.modifiers value is 18, while for private int x=911 it is 2.

So if we change the modifiers of the Field [Test.class.getDeclaredField("x")] from 18 [final] to 2 [non-final], you can then modify the value of x:

 Test tObj = new Test();
 Field f_x = Test.class.getDeclaredField("x");

  // change modifiers 18 -> 2
  Field f_f_x = f_x.getClass().getDeclaredField("modifiers");
  f_f_x.setAccessible(true);
  f_f_x.setInt(f_x, 2/*non-final*/);

  f_x.setAccessible(true);
  f_x.setInt(tObj, 110); // change the value of x to 110
  System.out.println("statically compiled x = " + tObj.getX() + " ------ value changed at runtime to 110: " + f_x.getInt(tObj));

  f_x.setInt(tObj, 111); // and you can keep changing the value of x
  System.out.println(f_x.getInt(tObj));

But trying to restore the original modifiers afterwards with f_f_x.setInt(f_x, 18/*final*/); has no effect, because a Field only initializes its FieldAccessor reference once.
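Putting the pieces together, here is the experiment as one runnable class; a minimal sketch assuming the jdk1.6-era reflection behaviour described above (on much newer JDKs the Field "modifiers" field is filtered out of reflection and this trick no longer works):

import java.lang.reflect.Field;

public class FinalFieldDemo {
    public static void main(String[] args) throws Exception {
        Test tObj = new Test();                       // the Test class defined above
        Field f_x = Test.class.getDeclaredField("x");

        // flip the cached modifiers from 18 (private final) to 2 (private)
        Field modifiers = Field.class.getDeclaredField("modifiers");
        modifiers.setAccessible(true);
        modifiers.setInt(f_x, 2);

        f_x.setAccessible(true);
        f_x.setInt(tObj, 110);

        // getX() still returns the statically compiled 911; reflection sees 110
        System.out.println(tObj.getX() + " vs " + f_x.getInt(tObj));
    }
}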

 

In the course of the above I also found a jdk bug. If you change the highlighted line above to the following:

f_f_x.setInt(f_x, 10 /* this value is the static non-final modifiers, but x is non-static; it makes f_x obtain a static FieldAccessor */); then you trigger "A fatal error has been detected by the Java Runtime Environment" and a corresponding err log file is produced. Clearly the JVM does not guard against this case; I have submitted it to the Sun bug report site.

Sun notified me on 2010-03-26 that they have acknowledged the bug, bug id: 6938467. Publication to the external site may be delayed by one or two days.

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6938467 

 

 

 



MINA and xSocket share the same performance flaw and trap; Grizzly does better
(adapterofcoms, 2010-03-05, http://www.aygfsteel.com/adapterofcoms/articles/314560.html)

MINA, Grizzly [grizzly-nio-framework] and xSocket are all server frameworks based on java nio.
The performance flaw in question concerns what happens when SelectionKey.OP_READ becomes ready on a channel: 1. the select thread reads the data and only then dispatches to the application's handler, or 2. it dispatches right away and a handler thread takes care of both reading the data and handling it.
mina and xsocket do 1; grizzly-nio-framework does 2.
Reading the bytes in a channel buffer is fast, but scale it up: once the number of connected channels reaches the tens of thousands or more, the delayed response becomes more and more noticeable.
MINA:
for all selectedKeys
{
    read data then fireMessageReceived.
}
xSocket:
for all selectedKeys
{
    read data ,append it to readQueue then performOnData.
}
mina does not use a threadpool to dispatch when it calls fireMessageReceived, so the application has to do its own dispatching inside handler.messageReceived. xsocket's performOnData, by default, dispatches to a threadpool [WorkerPool]; the WorkerPool does solve the problem of the pool's threads never filling up to the maximum [the same fix tomcat6 applies], but its scheduling mechanism still lacks flexibility.
Grizzly:
for all selectedKeys
{
   [NIOContext---filterChain.execute--->our filter.execute]<------run In DefaultThreadPool
}
grizzly's DefaultThreadPool pretty much rewrites the java util concurrent threadpool and uses its own LinkedTransferQueue, but it likewise lacks a flexible mechanism for scheduling the threads in the pool.
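To make the contrast between approach 1 and approach 2 concrete, here is a minimal sketch of approach 2 in plain java nio; it is my own illustration, not code taken from any of the three frameworks. The select thread never reads: it parks the read interest and hands the ready key to a pool thread, which does both the read and the handling.

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;

public class DispatchFirstLoop {
    private final Selector selector;
    private final ExecutorService pool;

    public DispatchFirstLoop(Selector selector, ExecutorService pool) {
        this.selector = selector;
        this.pool = pool;
    }

    public void run() throws Exception {
        while (true) {
            selector.select(1000L);
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                final SelectionKey key = it.next();
                it.remove();
                if (!key.isValid() || !key.isReadable()) {
                    continue;
                }
                // stop selecting OP_READ on this channel while a worker owns it
                key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
                pool.execute(new Runnable() {
                    public void run() {
                        ByteBuffer buf = ByteBuffer.allocate(8192);
                        try {
                            SocketChannel ch = (SocketChannel) key.channel();
                            int n = ch.read(buf);        // the read happens on the worker thread
                            if (n > 0) {
                                buf.flip();
                                handle(key, buf);        // application handling, also on the worker
                            }
                            // re-register interest and wake the select thread up
                            key.interestOps(key.interestOps() | SelectionKey.OP_READ);
                            key.selector().wakeup();
                        } catch (Exception e) {
                            try { key.channel().close(); } catch (Exception ignored) {}
                        }
                    }
                });
            }
        }
    }

    protected void handle(SelectionKey key, ByteBuffer data) {
        // business logic goes here
    }
}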

Below is a source-code walk-through of MINA, xSocket and Grizzly in turn.
Apache MINA (based on the mina-2.0.0-M6 source):
    The most common mina nio tcp usage looks like this:
        NioSocketAcceptor acceptor = new NioSocketAcceptor(/*NioProcessorPool's size*/);
        DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();        
        //chain.addLast("codec", new ProtocolCodecFilter(
                //new TextLineCodecFactory()));
        ......
        // Bind
        acceptor.setHandler(/*our IoHandler*/);
        acceptor.bind(new InetSocketAddress(port));
------------------------------------------------------------------------------------
    Start with NioSocketAcceptor (extends AbstractPollingIoAcceptor):
bind(SocketAddress)--->bindInternal--->startupAcceptor: starts AbstractPollingIoAcceptor.Acceptor.run on a thread of the executor [Executor], registers [interestOps: SelectionKey.OP_ACCEPT], then wakes up the selector.
As soon as a connection comes in, a NioSocketSession is built, corresponding to the channel; session.getProcessor().add(session) then adds the current channel to a NioProcessor's selector [interestOps: SelectionKey.OP_READ], and from then on requests arriving on each connection are handled by its NioProcessor.

A few points worth spelling out:
1. One NioSocketAcceptor maps to several NioProcessors; NioSocketAcceptor uses SimpleIoProcessorPool with DEFAULT_SIZE = Runtime.getRuntime().availableProcessors() + 1. This size can of course be set when constructing the NioSocketAcceptor.
2. One NioSocketAcceptor maps to one java nio selector [OP_ACCEPT]; one NioProcessor likewise maps to one java nio selector [OP_READ].
3. One NioSocketAcceptor maps to one internal AbstractPollingIoAcceptor.Acceptor---thread.
4. One NioProcessor likewise maps to one internal AbstractPollingIoProcessor.Processor---thread.
5. If you do not supply an Executor (thread pool) when constructing the NioSocketAcceptor, Executors.newCachedThreadPool() is used by default.
This Executor is shared by the NioSocketAcceptor and the NioProcessors, which means the Acceptor---thread (one) and the Processor---threads (several) above all come out of this Executor.
      Once a connection's java nio channel--NioSession has been added to ProcessorPool[i]--a NioProcessor--control passes to AbstractPollingIoProcessor.Processor.run.
AbstractPollingIoProcessor.Processor.run runs on one thread of the Executor above; the current NioProcessor handles the requests of every connection registered on its selector [interestOps: SelectionKey.OP_READ].

The main execution flow of AbstractPollingIoProcessor.Processor.run:
for (;;) {      
       ......
       int selected = selector.select(SELECT_TIMEOUT /* 1000L */);
       .......
       if (selected > 0) {
          process();
       }
       ......
}

process()-->for all session-channels with OP_READ-->read(session): this read is AbstractPollingIoProcessor's private void read(T session) method.
The main flow of read(session) is: read the channel data into buf; if readBytes > 0 then IoFilterChain.fireMessageReceived(buf) /* our IoHandler.messageReceived gets called inside this */.
    At this point the way mina nio handles a request is clear.
    mina's threading model for handling requests is clear too, and so is the performance problem: inside AbstractPollingIoProcessor.Processor.run-->process-->read(per session), process has mina read the data of all selected channels one by one and then fireMessageReceived into our IoHandler.messageReceived, instead of handling them concurrently; later requests are obviously handled with a delay.
Suppose NioProcessorPool's size = 2 and 200 clients connect at the same time, with 100 connections registered on each NioProcessor. Each NioProcessor works through its 100 requests in order,
so for the 100th of them to be handled, it has to wait until the preceding 99 have been processed.
    Some have suggested an improvement: dispatch again from inside our own IoHandler.messageReceived using a thread pool. That is certainly a good idea (mina ships an ExecutorFilter for exactly this; see the sketch below).
    But the request is still delayed, because the time spent reading data is still there: before the 100th request's data can be read, the preceding 99 have to be read first. Even enlarging the ProcessorPool does not solve that.
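For completeness, the ExecutorFilter just mentioned is wired into the chain like this; a minimal sketch based on the earlier usage example (the port number and the empty handler are placeholders). It moves messageReceived onto the filter's own thread pool, but the read itself still happens on the NioProcessor thread:

import java.net.InetSocketAddress;

import org.apache.mina.core.filterchain.DefaultIoFilterChainBuilder;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class ExecutorFilterSetup {
    public static void main(String[] args) throws Exception {
        NioSocketAcceptor acceptor = new NioSocketAcceptor();
        DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();
        // events downstream of this filter (messageReceived included) run in the
        // filter's thread pool, not on the NioProcessor thread
        chain.addLast("threadPool", new ExecutorFilter());
        acceptor.setHandler(new IoHandlerAdapter());
        acceptor.bind(new InetSocketAddress(8080));
    }
}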
    On top of that, mina's trap (to borrow a fashionable word) also shows up, and it lives in read(session). Before describing the trap, one clarification: when our client sends a message body to the server it is not necessarily sent whole in one go; it may be sent in several pieces, especially when the client is busy or the message body is long. In that situation mina will call our IoHandler.messageReceived several times, so the message body arrives split into several pieces; the data we handle on each call of IoHandler.messageReceived is incomplete, which leads to lost or invalid data.
Here is the source of read(session):
private void read(T session) {
        IoSessionConfig config = session.getConfig();
        IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());

        final boolean hasFragmentation =
            session.getTransportMetadata().hasFragmentation();

        try {
            int readBytes = 0;
            int ret;

            try {
                if (hasFragmentation/* hasFragmentation is always true; perhaps the mina developers were aware of the fragmentation problem, but the handling below is nowhere near enough: as soon as the client pauses between sends, ret can be 0, the while loop exits, and the incomplete readBytes gets fired */) {
                    while ((ret = read(session, buf)) > 0) {
                        readBytes += ret;
                        if (!buf.hasRemaining()) {
                            break;
                        }
                    }
                } else {
                    ret = read(session, buf);
                    if (ret > 0) {
                        readBytes = ret;
                    }
                }
            } finally {
                buf.flip();
            }

            if (readBytes > 0) {
                IoFilterChain filterChain = session.getFilterChain();
                filterChain.fireMessageReceived(buf);
                buf = null;

                if (hasFragmentation) {
                    if (readBytes << 1 < config.getReadBufferSize()) {
                        session.decreaseReadBufferSize();
                    } else if (readBytes == config.getReadBufferSize()) {
                        session.increaseReadBufferSize();
                    }
                }
            }
            if (ret < 0) {
                scheduleRemove(session);
            }
        } catch (Throwable e) {
            if (e instanceof IOException) {
                scheduleRemove(session);
            }
            IoFilterChain filterChain = session.getFilterChain();
            filterChain.fireExceptionCaught(e);
        }
    }
You can try this trap out for yourself: see whether one complete message arrives in several deliveries, i.e. whether your IoHandler.messageReceived is called more than once for it.
Keeping our application's message bodies intact is also simple: keep a "breakpoint" marker, attach it to the current IoSession, and once the message body data is complete, dispatch it and remove it from the current session.
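As a concrete illustration of that "breakpoint" idea, here is a minimal sketch that assumes a hypothetical fixed-length protocol of MESSAGE_LENGTH bytes; the attribute key, the length and the dispatch method are all invented for the illustration (a real application would use its own framing, or mina's codec filters):

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class AccumulatingHandler extends IoHandlerAdapter {
    private static final String ACC = "accumulated.buffer"; // session attribute key (ours)
    private static final int MESSAGE_LENGTH = 1024;         // hypothetical fixed message size

    @Override
    public void messageReceived(IoSession session, Object message) {
        IoBuffer in = (IoBuffer) message;
        IoBuffer acc = (IoBuffer) session.getAttribute(ACC);
        if (acc == null) {
            acc = IoBuffer.allocate(MESSAGE_LENGTH).setAutoExpand(true);
            session.setAttribute(ACC, acc);
        }
        acc.put(in);                                  // append this fragment
        if (acc.position() >= MESSAGE_LENGTH) {       // "breakpoint" reached: message complete
            acc.flip();
            dispatch(session, acc);                   // hand the complete message on
            session.removeAttribute(ACC);             // reset for the next message
        }
    }

    private void dispatch(IoSession session, IoBuffer complete) {
        // hand off to a thread pool / business logic here
    }
}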
-------------------------------------------------------------------------------------------------- 
Next, the xSocket v2_8_8 source:
tcp usage e.g:
IServer srv = new Server(8090, new EchoHandler());
srv.start() or run();
-----------------------------------------------------------------------
class EchoHandler implements IDataHandler {  
    public boolean onData(INonBlockingConnection nbc)
             throws IOException,
             BufferUnderflowException,
             MaxReadSizeExceededException {
       String data = nbc.readStringByDelimiter("\r\n");
       nbc.write(data + "\r\n");
       return true;
    }
  }
------------------------------------------------------------------------
Note 1. Server : Acceptor : IDataHandler ------ 1:1:1
Server.run-->IoAcceptor.accept() blocks on the port; as soon as a channel arrives, an IoSocketDispatcher is taken from the IoSocketDispatcherPool and an IoSocketHandler plus a NonBlockingConnection are built; Server.LifeCycleHandler.onConnectionAccepted(ioHandler) initializes the IoSocketHandler. Note: IoSocketDispatcherPool.size defaults to 2, i.e. there are only 2 do-select threads and 2 corresponding IoSocketDispatchers; these play the same role as MINA's NioProcessors.
Note 2. IoSocketDispatcher [java nio Selector] : IoSocketHandler : NonBlockingConnection ------ 1:1:1
Inside IoSocketDispatcher.run [one Selector per dispatcher]-->IoSocketDispatcher.handleReadWriteKeys:
for all selectedKeys
{
    IoSocketHandler.onReadableEvent / onWriteableEvent.
}
IoSocketHandler.onReadableEvent proceeds as follows:
1.readSocket();
2.NonBlockingConnection.IoHandlerCallback.onData
NonBlockingConnection.onData--->appendDataToReadBuffer: readQueue append data
3.NonBlockingConnection.IoHandlerCallback.onPostData
NonBlockingConnection.onPostData--->HandlerAdapter.onData[our dataHandler] performOnData in WorkerPool[threadpool]. 

Because the channel's data is read off into the readQueue, the application's dataHandler.onData will be called repeatedly until the data in the readQueue has been consumed. So a trap much like mina's still exists, and the fix is much the same, since here you hold the NonBlockingConnection.
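On the xSocket side the usual counterpart is to mark the read position and roll back when the record turns out to be incomplete. The sketch below follows xSocket's documented mark/reset pattern as I remember it from the v2.x API (markReadPosition / resetToReadMark / removeReadMark), so treat those calls as assumptions to check against your xSocket version:

import java.io.IOException;
import java.nio.BufferUnderflowException;

import org.xsocket.MaxReadSizeExceededException;
import org.xsocket.connection.IDataHandler;
import org.xsocket.connection.INonBlockingConnection;

class FrameHandler implements IDataHandler {
    public boolean onData(INonBlockingConnection nbc)
            throws IOException, BufferUnderflowException, MaxReadSizeExceededException {
        nbc.markReadPosition();                       // remember where this record starts
        try {
            String line = nbc.readStringByDelimiter("\r\n"); // throws BufferUnderflowException if incomplete
            nbc.removeReadMark();
            handle(line);                             // only complete records reach the application
        } catch (BufferUnderflowException incomplete) {
            nbc.resetToReadMark();                    // roll back; onData is called again when more bytes arrive
        }
        return true;
    }

    private void handle(String line) {
        // business logic
    }
}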
----------------------------------------------------------------------------------------------
Finally, the grizzly-nio-framework v1.9.18 source:
tcp usage e.g:
Controller sel = new Controller();
         sel.setProtocolChainInstanceHandler(new DefaultProtocolChainInstanceHandler(){
             public ProtocolChain poll() {
                 ProtocolChain protocolChain = protocolChains.poll();
                 if (protocolChain == null){
                     protocolChain = new DefaultProtocolChain();
                     //protocolChain.addFilter(our app's filter/* the application's processing starts from a filter, similar to mina's IoHandler and xSocket's dataHandler */);
                     //protocolChain.addFilter(new ReadFilter());
                 }
                 return protocolChain;
             }
         });
         //if you do not add your own SelectorHandler, the Controller defaults to TCPSelectorHandler on port 18888
         sel.addSelectorHandler(our app's selectorHandler on special port);        
  sel.start();
------------------------------------------------------------------------------------------------------------
 Note 1. Controller : ProtocolChain : Filter ------ 1:1:n; Controller : SelectorHandler ------ 1:n;
SelectorHandler [one Selector per handler] : SelectorHandlerRunner ------ 1:1.
Controller.start()--->for each SelectorHandler, start a SelectorHandlerRunner to run.
SelectorHandlerRunner.run()--->selectorHandler.select()  then handleSelectedKeys:
for all selectedKeys
{
   NIOContext.execute: dispatching to threadpool for ProtocolChain.execute--->our filter.execute.
}
You will notice there is no read-data-from-channel step here, because that is left to your filter; so the mina/xsocket trap naturally never arises, and the dispatch happens earlier. But note that SelectorHandler : Selector : SelectorHandlerRunner : Thread [SelectorHandlerRunner.run] are all 1:1:1:1, i.e. only one thread does doSelect and then handleSelectedKeys.

    By comparison, grizzly wins on concurrent performance but loses to mina and xsocket on ease of use. For instance, the object that stands for the current connection or session (mina's IoSession, xSocket's INonBlockingConnection) is, in grizzly, the NIOContext, but NIOContext provides neither session/connection lifecycle events nor the usual read/write operations; you have to extend SelectorHandler and ProtocolFilter yourself. Looked at another way, that is also evidence that grizzly's extensibility and flexibility are a cut above.

 



Flaws in the Java thread pool (java util concurrent threadpool, since jdk1.5)
(adapterofcoms, 2010-02-20, http://www.aygfsteel.com/adapterofcoms/articles/313482.html)

    The author of java.util.concurrent is Doug Lea, arguably the individual with the greatest influence on Java in the world; before jdk1.5 you surely knew his backport-util-concurrent.jar. "The gentleman who always wears glasses, sports a Kaiser Wilhelm II moustache, forever has a modest, shy smile on his face, and serves in the computer science department of the State University of New York at Oswego" is a grandmaster of concurrent programming!
    Since jdk1.5, the thread pool model under java.util.concurrent has been queue based: there is only one threadpool class, but several queues (LinkedBlockingQueue, SynchronousQueue, ScheduledThreadPoolExecutor.DelayedWorkQueue, etc.; see java.util.concurrent.Executors). Note: the problem below concerns LinkedBlockingQueue, with jdk1.6 as the reference source.
    The threadpool tracks the threads in the pool with three properties:
corePoolSize (similar to a minimumPoolSize), poolSize (the current number of threads in the pool), maximumPoolSize (the maximum number of threads).
They mean that poolSize++/-- happens every time a thread is created or ends; at its busiest the threadpool never creates more than maximumPoolSize threads;
when idle, poolSize should fall back to corePoolSize; and if the pool has never handled a single request since creation, poolSize is of course 0.
    With those two paragraphs in place, here is the problem I want to raise.
Let's look at java.util.concurrent.ThreadPoolExecutor's execute method:
public void execute(Runnable command) {
        if (command == null)
            throw new NullPointerException();
        if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
            if (runState == RUNNING && workQueue.offer(command)) {
                if (runState != RUNNING || poolSize == 0)
                    ensureQueuedTaskHandled(command);
            }
            else if (!addIfUnderMaximumPoolSize(command))
                reject(command); // is shutdown or saturated
        }
}
What it expresses, in essence: if the current poolSize < corePoolSize, add threads until poolSize == corePoolSize.
Once poolSize has reached corePoolSize, the command (task) is put to the workQueue; if the workQueue is a LinkedBlockingQueue,
then only when the commands offered to the workQueue reach workQueue.capacity does the threadpool keep adding threads, up to maximumPoolSize.
1. ***** If LinkedBlockingQueue.capacity is set to Integer.MAX_VALUE, the threads in the pool will almost never reach maximumPoolSize. *****
So if you use Executors.newFixedThreadPool, maximumPoolSize and corePoolSize are the same and LinkedBlockingQueue.capacity == Integer.MAX_VALUE; and the same happens if you construct new ThreadPoolExecutor(corePoolSize, maximumPoolSize, keepAliveTime, timeUnit, new LinkedBlockingQueue<Runnable>(/*Integer.MAX_VALUE*/)).
In both usages maximumPoolSize is effectively meaningless, i.e. the number of threads in the pool will never exceed corePoolSize.
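Point 1 is easy to see in a minimal, self-contained demo (the numbers are arbitrary): with an unbounded LinkedBlockingQueue the pool is flooded with tasks yet never grows past corePoolSize.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // core=2, max=10, but the queue is unbounded, so the pool never grows past 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(1000L); } catch (InterruptedException ignored) {}
                }
            });
        }
        Thread.sleep(500L);
        // expected: poolSize=2, queued=98; maximumPoolSize (10) never comes into play
        System.out.println("poolSize=" + pool.getPoolSize() + ", queued=" + pool.getQueue().size());
        pool.shutdown();
    }
}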
This presumably also frustrated the tomcat6 developers, who had to override LinkedBlockingQueue; taking tomcat-6.0.20-src as the example:
org.apache.tomcat.util.net.NioEndpoint.TaskQueue extends LinkedBlockingQueue<Runnable> override offer method: 
 public void setParent(ThreadPoolExecutor tp, NioEndpoint ep) {
            parent = tp;
            this.endpoint = ep;
        }
       
        public boolean offer(Runnable o) {
            //we can't do any checks
            if (parent==null) return super.offer(o);
            //we are maxed out on threads, simply queue the object
            if (parent.getPoolSize() == parent.getMaximumPoolSize()) return super.offer(o);
            //we have idle threads, just add it to the queue
            //this is an approximation, so it could use some tuning
            if (endpoint.activeSocketProcessors.get()<(parent.getPoolSize())) return super.offer(o);
            //if we have less threads than maximum force creation of a new thread
            if (parent.getPoolSize()<parent.getMaximumPoolSize()) return false;
            //if we reached here, we need to add it to the queue
            return super.offer(o);
        } 

org.apache.tomcat.util.net.NioEndpoint.start()-->
   TaskQueue taskqueue = new TaskQueue();/***queue.capacity==Integer.MAX_VALUE***/
                     TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-");
                     executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60,TimeUnit.SECONDS,taskqueue, tf);
                     taskqueue.setParent( (ThreadPoolExecutor) executor, this);
2. ***** If LinkedBlockingQueue.capacity is set to a suitable value smaller than Integer.MAX_VALUE, then only when the number of tasks put into the queue reaches that capacity will the pool keep adding threads, letting poolSize exceed corePoolSize (but not maximumPoolSize). Isn't it a bit late to start adding threads at that point? *****
And then reject(command) may follow close behind, while what value to give LinkedBlockingQueue.capacity becomes its own headache.
So what ThreadPoolExecutor + LinkedBlockingQueue expresses is: first grow the thread count to corePoolSize, and only once the queue has reached its maximum capacity keep growing beyond corePoolSize, up to maximumPoolSize, to handle the tasks.
    But why can't we have it this way: set LinkedBlockingQueue.capacity to Integer.MAX_VALUE so tasks are accepted as far as possible, and at the same time, when busy, grow the pool up to maximumPoolSize so those tasks get processed as fast as possible? Even with LinkedBlockingQueue.capacity set to a suitable value far smaller than Integer.MAX_VALUE, there is no reason threads should only be added past corePoolSize toward maximumPoolSize after the task count has hit the queue's capacity.
    So the weakness of the ThreadPoolExecutor + LinkedBlockingQueue combination in java util concurrent comes out: if we want the pool to accept as many tasks as possible, we set LinkedBlockingQueue.capacity to Integer.MAX_VALUE, but then the thread count can never fill up to maximumPoolSize and the pool never reaches its full processing capacity; if we set LinkedBlockingQueue.capacity to a small value, the thread count does fill up to maximumPoolSize, but once all the pool's threads are busy the pool starts rejecting submitted tasks because the queue is full.
    If we set LinkedBlockingQueue.capacity to a large value that is still not Integer.MAX_VALUE, then threads are only added when the pool is about to push past corePoolSize, i.e. when the task queue is full; by that time the queued tasks have already been delayed rather than handled promptly.
    In other words, ThreadPoolExecutor lacks a flexible thread-scheduling mechanism: it does not dynamically adjust the thread count according to how task execution is going (busy or idle) and how many tasks are waiting in the queue, and its processing efficiency suffers for it.
So what should count as "busy"?
busy[1]: poolSize == corePoolSize and the number of threads currently busy executing tasks (currentBusyWorkers) equals poolSize [regardless of whether the number of tasks put into the queue has reached queue.capacity].
busy[2].1: poolSize == corePoolSize and the number of tasks put into the queue has reached queue.capacity. [queue.capacity covers the case where the task queue has a hard limit.]
busy[2].2: the pool's basic goal is to process a large number of request tasks as fast as possible, so there is no need to wait until the tasks put into the queue reach queue.capacity before declaring it busy; it is enough that the number of tasks currently in the queue (task_counter) stands in some ratio to poolSize or maximumPoolSize, for example task_counter >= poolSize, or >= maximumPoolSize*(NumberOfProcessors+1), and the queue.capacity limit can then be dropped altogether.
In both the busy[1] and busy[2] cases the thread count should be increased, up to maximumPoolSize, so the submitted tasks get handled as fast as possible.
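As a minimal sketch of that policy, in the same spirit as tomcat's TaskQueue above, here is my own illustration of busy[1] (every current thread has a task and the pool can still grow): the queue refuses to enqueue so the executor adds a thread first, and it only queues once the pool is at maximumPoolSize. This is an illustration only, not the purchased implementation mentioned further below.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerQueue extends LinkedBlockingQueue<Runnable> {

    private volatile ThreadPoolExecutor parent;

    public void setParent(ThreadPoolExecutor parent) {
        this.parent = parent;
    }

    @Override
    public boolean offer(Runnable task) {
        if (parent == null) {
            return super.offer(task);
        }
        // busy[1]: every pool thread is executing a task and the pool can still grow;
        // returning false makes ThreadPoolExecutor.execute() create a new thread instead of queueing
        if (parent.getActiveCount() >= parent.getPoolSize()
                && parent.getPoolSize() < parent.getMaximumPoolSize()) {
            return false;
        }
        return super.offer(task);
    }

    // convenience factory wiring queue and pool together
    public static ThreadPoolExecutor newEagerPool(int core, int max) {
        EagerQueue queue = new EagerQueue();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS, queue,
                new RejectedExecutionHandler() {
                    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
                        try {
                            // pool is saturated (or lost a race): fall back to plain queueing
                            e.getQueue().put(r);
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
        queue.setParent(pool);
        return pool;
    }
}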

The above covers the flaws of ThreadPoolExecutor + LinkedBlockingQueue when busy; what about when idle?
If corePoolSize < poolSize < maximumPoolSize, then after threads wait out keepAliveTime the pool should shrink back to corePoolSize. Here, though, it quietly turns into a bug, and a hard one to spot: poolSize does come down, but it can easily come down too far, below corePoolSize, possibly all the way to 0.
ThreadPoolExecutor.Worker.run()-->ThreadPoolExecutor.getTask():
Runnable getTask() {
        for (;;) {
            try {
                int state = runState;
                if (state > SHUTDOWN)
                    return null;
                Runnable r;
                if (state == SHUTDOWN)  // Help drain queue
                    r = workQueue.poll();
                else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
      /* queue is empty; after the timeout here, poll returns null, and workerCanExit() is then called and returns true */
                    r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
                else
                    r = workQueue.take();
                if (r != null)
                    return r;
                if (workerCanExit()) {
                    if (runState >= SHUTDOWN) // Wake up others
                        interruptIdleWorkers();
                    return null;
                }
                // Else retry
            } catch (InterruptedException ie) {
                // On interruption, re-check runState
            }
        }
}//end getTask.
private boolean workerCanExit() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        boolean canExit;
        try {
            canExit = runState >= STOP ||
                workQueue.isEmpty() ||
                (allowCoreThreadTimeOut &&
                 poolSize > Math.max(1, corePoolSize));
        } finally {
            mainLock.unlock();
        }
        return canExit;
}//end workerCanExit.

After workerCanExit() returns true, poolSize is still greater than corePoolSize; the value of poolSize has not changed yet.
Only when ThreadPoolExecutor.Worker.run() finishes-->ThreadPoolExecutor.Worker.workerDone-->is poolSize-- executed; unfortunately that is too late, and in a multithreaded environment poolSize can end up smaller than corePoolSize instead of equal to it!
For example: if poolSize (6) is greater than corePoolSize (5), it is not necessarily a single thread that times out at that moment but several; each of them may exit run, so the poolSize-- decrements overshoot corePoolSize.
    A word about java.util.concurrent.ThreadPoolExecutor's allowCoreThreadTimeOut method, @since 1.6: public void allowCoreThreadTimeOut(boolean value);
it means that when idle, threads wait keepAliveTime and on timeout poolSize is allowed to drop to 0. [I would actually prefer it to drop to a minimumPoolSize, especially in a server environment where we want the pool to keep a certain number of threads around to promptly serve the scattered, intermittent, bursty, low-pressure requests.] Of course you can also treat corePoolSize as that minimumPoolSize and simply never call this method.
    To address the flaws above, I have revised the java util concurrent thread pool model, in particular optimizing task handling in the "busy" (busy[1], busy[2]) cases so the pool processes as many tasks as possible, as quickly as possible.
Below you can purchase the source of the resulting, more efficient thread pool:
Java threadpool:
http://item.taobao.com/auction/item_detail-0db2-9078a9045826f273dcea80aa490f1a8b.jhtml
C [not C++] threadpool on Windows NT:
http://item.taobao.com/auction/item_detail-0db2-28e37cb6776a1bc526ef5a27aa411e71.jhtml



A bug when DWR is integrated with Spring: SpringCreator.getType???
(adapterofcoms, 2010-02-10, http://www.aygfsteel.com/adapterofcoms/articles/312495.html)

DWR's SpringCreator.getType looks like this:
public Class<?> getType()
{
    if (clazz == null)
    {
        try
        {
            clazz = getInstance().getClass();
        }
        catch (InstantiationException ex)
        {
            log.error("Failed to instansiate object to detect type.", ex);
            return Object.class;
        }
    }

    return clazz;
}

Now look at its getInstance; the instance is ultimately created by spring:

public Object getInstance() throws InstantiationException
{
    try
    {
        if (overrideFactory != null)
        {
            return overrideFactory.getBean(beanName);
        }

        if (factory == null)
        {
            factory = getBeanFactory();
        }

        if (factory == null)
        {
            log.error("DWR can't find a spring config. See following info logs for solutions");
            log.info("- Option 1. In dwr.xml, <create creator='spring' ...> add ...");
            log.info("- Option 2. Use a spring org.springframework.web.context.ContextLoaderListener.");
            log.info("- Option 3. Call SpringCreator.setOverrideBeanFactory() from your web-app");
            throw new InstantiationException("DWR can't find a spring config. See the logs for solutions");
        }

        return factory.getBean(beanName);
    }
    catch (InstantiationException ex)
    {
        throw ex;
    }
    catch (Exception ex)
    {
        throw new InstantiationException("Illegal Access to default constructor on " + clazz.getName() + " due to: " + ex);
    }
}

getInstance returns the instance created by Spring. Clearly SpringCreator.getType is doing more than it needs to: it first creates an instance and then obtains the object's type from the instance's getClass, while Spring's beanFactory.getType offers the same capability without creating an instance first.

Perhaps the fellow who wrote this code did not know about Spring's beanFactory.getType method.
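A minimal sketch of the difference; the bean name "myService" and the config file "beans.xml" are placeholders for the illustration:

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

public class GetTypeDemo {
    public static void main(String[] args) {
        BeanFactory factory = new XmlBeanFactory(new ClassPathResource("beans.xml"));

        // resolves the type from the bean definition; no bean instance has to be created
        Class<?> type = factory.getType("myService");

        // what SpringCreator.getType effectively did: create the bean first, then ask for its class
        Class<?> typeViaInstance = factory.getBean("myService").getClass();

        System.out.println(type + " / " + typeViaInstance);
    }
}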


Here is my modified SpringCreator.getType:

public Class<?> getType()
{
    if (clazz == null)
    {
        try
        {
            if (overrideFactory != null) {
                clazz = overrideFactory.getType(beanName);
            } else {
                if (factory == null)
                    factory = getBeanFactory();
                clazz = factory.getType(beanName);
            }
        }
        catch (Exception ex)
        {
            log.error("Failed to detect type.", ex);
            return Object.class;
        }
    }

    return clazz;
}

If you run into "Error loading class for creator ......", go and modify SpringCreator.


Browsers [IE, Firefox] do not support comet: AJAX cannot deliver server-pushed messages
(adapterofcoms, 2010-02-01, http://www.aygfsteel.com/adapterofcoms/articles/311551.html)

Comet is the technique of the server actively pushing messages to the client, here with the focus on HTTP-based protocols; with a raw socket the problem does not arise.

Starting with tomcat6, the org.apache.catalina.CometProcessor interface was added to support comet.
Modify conf/server.xml:

In <Connector port="8080" protocol="HTTP/1.1" .../>, change the protocol to "org.apache.coyote.http11.Http11NioProtocol".
Java: see the CometServlet example on tomcat.apache.org.
import javax.servlet.http.HttpServlet;
import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;

CometServlet extends HttpServlet implements CometProcessor
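Such a servlet implements a single event(CometEvent) callback. A minimal sketch, loosely following the CometServlet example published on tomcat.apache.org; the branch bodies are placeholders:

import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;

public class SimpleCometServlet extends HttpServlet implements CometProcessor {

    public void event(CometEvent event) throws IOException, ServletException {
        if (event.getEventType() == CometEvent.EventType.BEGIN) {
            // Connection opened: do NOT complete the response here.
            // Stash event.getHttpServletResponse() somewhere so a background
            // thread can push data to the client later through its writer.
        } else if (event.getEventType() == CometEvent.EventType.READ) {
            // Client sent data: drain the input so the connection stays healthy.
            InputStream in = event.getHttpServletRequest().getInputStream();
            byte[] buf = new byte[512];
            while (in.available() > 0) {
                in.read(buf);
            }
        } else {
            // END or ERROR: the client disconnected or a timeout/error occurred.
            event.close();
        }
    }
}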

javascript:

var xmlReq;
function installComet(){
 xmlReq = window.ActiveXObject ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();
 xmlReq.onreadystatechange = handler;
 xmlReq.open("GET", "/yourapp/comet", true);
 xmlReq.send();
}
function handler(){
 try{
  if(xmlReq.readyState >= 3){
   alert(xmlReq.responseText);
  }
 }catch(e){
  alert(xmlReq.readyState + ":e->:" + e.message);
 }
}

    In every version of the IE browser, handler is called back only once, no matter how many messages the server sends on this connection; when readyState reaches 3,
any access to responseText raises the javascript error: "The data necessary to complete this operation is not yet available."

    In Firefox, handler is called multiple times, but responseText keeps the earlier messages and is never cleared; the data in responseText accumulates as the server's messages arrive.

    As of today, browsers can only support comet on the client side through plug-ins, which is why the widely installed Flash Player with ActionScript is the first choice;
ActionScript sets up the long-lived connection over a socket.

    So none of those AJAX frameworks can truly support comet; they can only poll with setTimeout/setInterval.
dwr's ReverseAjax does exactly that, using setTimeout to poll the server; see the source of dwr's engine.js.


