Purpose: rapidly develop high-performance, highly reliable network servers and clients.
Strengths: provides an asynchronous, event-driven framework and tools for network applications.
In plain terms: a convenient tool for working with sockets.
What if there were no Netty?
Ancient era: java.net + java.io
Modern era: java.nio
Others: Mina, Grizzly
Why not Mina?
1. Both are Trustin Lee's work, and Netty is the later one;
2. Mina's kernel is too tightly coupled to some of its features, so users cannot shed those features when they don't need them, which costs performance; Netty fixed this design problem;
3. Netty's documentation is clearer, and most of Mina's features are also available in Netty;
4. Netty has a shorter release cycle and ships new versions faster;
5. Their architectures differ little; Mina lives under Apache while Netty is backed by JBoss and integrates closely with it; Netty supports Google Protocol Buffers and has more complete IoC container support (Spring, Guice, JBoss MC, and OSGi);
6. Netty is simpler to use than Mina: you can write custom handlers for upstream and/or downstream events, and use decoders and encoders to decode and encode the content you send;
7. Netty and Mina differ in how they handle UDP: Netty exposes UDP's connectionless nature directly (see the sketch after this list), while Mina abstracts UDP at a higher level and can treat it as a "connection-oriented" protocol; making Netty do the same is rather hard.
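To make point 7 concrete, here is a minimal sketch of Netty 3.x's connectionless UDP style; the class name EchoUdpServer and port 9999 are invented for illustration:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;

public class EchoUdpServer {
    public static void main(String[] args) {
        // No "connections": one bound channel receives datagrams from anyone.
        ConnectionlessBootstrap bootstrap = new ConnectionlessBootstrap(
                new NioDatagramChannelFactory(Executors.newCachedThreadPool()));

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        // The peer's address travels with each event, not with the channel.
                        e.getChannel().write(e.getMessage(), e.getRemoteAddress());
                    }
                });
            }
        });

        bootstrap.bind(new InetSocketAddress(9999));
    }
}
```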
Netty's features
Design
A unified API for different transport types (blocking and non-blocking)
Based on a flexible, extensible event-driven model
A highly customizable threading model
Reliable connectionless datagram socket support (UDP)
Performance
Better throughput and lower latency
Less resource consumption
Unnecessary memory copies minimized
Security
Complete SSL/TLS and STARTTLS support
Runs well in restricted environments such as Applets and Android
Robustness
No more OutOfMemoryError due to fast, slow, or overloaded connections
No more unfair read/write ratios typical of NIO applications on high-speed networks
Ease of use
Complete JavaDoc, a user guide, and examples
Lean and simple
Depends only on JDK 1.5
Let's look at some examples!
Server side:
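The original server listing is not preserved in this text, so here is a minimal Netty 3.x echo-server sketch in the style the article describes; the class name EchoServer and port 8080 are invented:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class EchoServer {
    public static void main(String[] args) {
        // Boss threads accept connections; worker threads do the I/O.
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss executor
                        Executors.newCachedThreadPool()));  // worker executor

        // Each new channel gets its own pipeline of handlers.
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                        e.getChannel().write(e.getMessage()); // echo back
                    }
                });
            }
        });

        bootstrap.bind(new InetSocketAddress(8080));
    }
}
```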
Client side:
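Likewise, a minimal client sketch under the same assumptions (EchoClient and the hello payload are invented):

```java
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.*;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.util.CharsetUtil;

public class EchoClient {
    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss executor
                        Executors.newCachedThreadPool()));  // worker executor

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
                        // Send a greeting as soon as the connection is up.
                        e.getChannel().write(
                                ChannelBuffers.copiedBuffer("hello", CharsetUtil.UTF_8));
                    }
                });
            }
        });

        bootstrap.connect(new InetSocketAddress("127.0.0.1", 8080));
    }
}
```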
Netty's overall architecture
Netty components
ChannelFactory
Boss
Worker
Channel
ChannelEvent
Pipeline
ChannelHandlerContext
Handler
Sink
Core server-side classes
NioServerSocketChannelFactory
NioServerBossPool
NioWorkerPool
NioServerBoss
NioWorker
NioServerSocketChannel
NioAcceptedSocketChannel
DefaultChannelPipeline
NioServerSocketPipelineSink
Channels
ChannelFactory
The factory that creates Channels; a very important class
Holds the parameters related to startup
NioServerSocketChannelFactory
NioClientSocketChannelFactory
NioDatagramChannelFactory
These are the NIO variants; there are also OIO and Local ones.
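A sketch of choosing between the factory variants; the executors and variable names are illustrative:

```java
import java.util.concurrent.Executors;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.channel.socket.oio.OioServerSocketChannelFactory;

public class FactoryChoice {
    public static void main(String[] args) {
        // NIO: selector-based, the common choice.
        ChannelFactory nio = new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),   // boss executor
                Executors.newCachedThreadPool());  // worker executor

        // OIO: classic blocking I/O, one thread per connection.
        ChannelFactory oio = new OioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        // In-VM transports also exist, e.g. DefaultLocalServerChannelFactory.
    }
}
```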
SelectorPool
The pools of threads that run Selectors
NioServerBossPool (default thread count: 1)
NioClientBossPool (default thread count: 1)
NioWorkerPool (default thread count: 2 * number of processors)
NioDatagramWorkerPool
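If the defaults don't fit, the pools can be constructed explicitly; a sketch assuming Netty 3.5+, where these pool classes are public:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.socket.nio.NioServerBossPool;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.channel.socket.nio.NioWorkerPool;

public class ExplicitPools {
    public static void main(String[] args) {
        Executor bossExecutor   = Executors.newCachedThreadPool();
        Executor workerExecutor = Executors.newCachedThreadPool();

        // Same sizes as the defaults, spelled out explicitly.
        NioServerBossPool bossPool = new NioServerBossPool(bossExecutor, 1);
        NioWorkerPool workerPool = new NioWorkerPool(
                workerExecutor, 2 * Runtime.getRuntime().availableProcessors());

        ChannelFactory factory = new NioServerSocketChannelFactory(bossPool, workerPool);
    }
}
```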
Selector
The selector; a very central component
NioServerBoss
NioClientBoss
NioWorker
NioDatagramWorker
Channel
The communication channel
NioServerSocketChannel
NioClientSocketChannel
NioAcceptedSocketChannel
NioDatagramChannel
Sink
Responsible for interacting with the underlying layer,
e.g. bind, write, close, and so on
NioServerSocketPipelineSink
NioClientSocketPipelineSink
NioDatagramPipelineSink
Pipeline
Maintains all of the Handlers
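A sketch of a pipeline factory for a line-based text protocol; BusinessHandler is a hypothetical handler (a version of it appears under Handler below):

```java
import org.jboss.netty.channel.*;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;

// One pipeline per channel; handlers run in order for upstream events
// and in reverse order for downstream events.
public class TextLinePipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() {
        ChannelPipeline p = Channels.pipeline();
        p.addLast("framer",  new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        p.addLast("decoder", new StringDecoder());   // upstream: bytes -> String
        p.addLast("encoder", new StringEncoder());   // downstream: String -> bytes
        p.addLast("handler", new BusinessHandler()); // hypothetical business handler
        return p;
    }
}
```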
ChannelHandlerContext
One per Handler in each Pipeline; the intermediary between a Handler and its Pipeline
Handler
The processor for a Channel's events
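A minimal handler sketch matching the pipeline above; the class name and echo behavior are invented:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class BusinessHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Upstream of the StringDecoder, the message is already a String.
        String msg = (String) e.getMessage();
        e.getChannel().write("echo: " + msg + "\n"); // StringEncoder turns this back into bytes
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getChannel().close(); // fail fast on errors
    }
}
```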
ChannelPipeline
An excellent design: event driven
An excellent design: the threading model
Caveats
The buffer position during decoding (see the decoder sketch after this list)
Closing the Channel
More Handlers
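On the decoding caveat: a FrameDecoder must leave the buffer's position untouched when a frame is incomplete, or the next read will start mid-frame. A sketch assuming a 4-byte length header; the class name is invented:

```java
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

public class LengthHeaderDecoder extends FrameDecoder {
    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;                  // header not complete; position untouched
        }
        buffer.markReaderIndex();         // remember the position before reading the header
        int length = buffer.readInt();    // 4-byte length header
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();    // roll the position back and wait for more bytes
            return null;
        }
        return buffer.readBytes(length);  // a complete frame; position advances past it
    }
}
```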
Closing the Channel
A Channel you are finished with can be closed right away:
1. Add a Listener to the ChannelFuture
2. writeComplete
A Channel that has been idle for a while can also be closed:
TimeoutHandler
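Sketches of those strategies, with invented class names; options 1 and 2 are alternatives, not meant to be combined:

```java
import org.jboss.netty.channel.*;
import org.jboss.netty.handler.timeout.ReadTimeoutHandler;
import org.jboss.netty.util.HashedWheelTimer;

public class CloseAfterUseHandler extends SimpleChannelUpstreamHandler {
    // 1. ChannelFuture + Listener: close once this particular write completes.
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelFuture future = e.getChannel().write("bye\n");
        future.addListener(ChannelFutureListener.CLOSE);
    }

    // 2. writeComplete: an upstream event fired after a write has been flushed.
    // @Override
    // public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e) {
    //     e.getChannel().close();
    // }
}

// Idle channels: a ReadTimeoutHandler at the front of the pipeline raises
// ReadTimeoutException when nothing is read for 30 seconds; a later handler
// can then close the channel from exceptionCaught.
// pipeline.addLast("timeout", new ReadTimeoutHandler(new HashedWheelTimer(), 30));
```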
ChannelEvent
In 3.x, every I/O operation created a new ChannelEvent object. For each read / write, it additionally created a new ChannelBuffer. It simplified the internals of Netty quite a lot because it delegates resource management and buffer pooling to the JVM. However, it often was the root cause of GC pressure and uncertainty which are sometimes observed in a Netty-based application under high load.

4.0 removes event object creation almost completely by replacing the event objects with strongly typed method invocations. 3.x had catch-all event handler methods such as handleUpstream() and handleDownstream(), but this is not the case anymore. Every event type has its own handler method now:
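The listing that followed this quotation is not preserved here; as a sketch, the typed callbacks on Netty 4.0's io.netty.channel.ChannelInboundHandler look roughly like this (a representative subset):

```java
// Netty 3.x: one catch-all method per direction
void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception;
void handleDownstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception;

// Netty 4.0: one strongly typed method per event
void channelRegistered(ChannelHandlerContext ctx) throws Exception;
void channelActive(ChannelHandlerContext ctx) throws Exception;
void channelInactive(ChannelHandlerContext ctx) throws Exception;
void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception;
void channelReadComplete(ChannelHandlerContext ctx) throws Exception;
void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception;
void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception;
```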
Separation of concerns: The Reactor pattern decouples application-independent demultiplexing and dispatching mechanisms from application-specific hook method functionality. The application-independent mechanisms become reusable components that know how to demultiplex events and dispatch the appropriate hook methods defined by Event Handlers. In contrast, the application-specific functionality in a hook method knows how to perform a particular type of service.
Improve modularity, reusability, and configurability of event-driven applications: The pattern decouples application functionality into separate classes. For instance, there are two separate classes in the logging server: one for establishing connections and another for receiving and processing logging records. This decoupling enables the reuse of the connection establishment class for different types of connection-oriented services (such as file transfer, remote login, and video-on-demand). Therefore, modifying or extending the functionality of the logging server only affects the implementation of the logging handler class.
Improves application portability: The Initiation Dispatcher's interface can be reused independently of the OS system calls that perform event demultiplexing. These system calls detect and report the occurrence of one or more events that may occur simultaneously on multiple sources of events. Common sources of events may include I/O handles, timers, and synchronization objects. On UNIX platforms, the event demultiplexing system calls are called select and poll [1]. In the Win32 API [16], the WaitForMultipleObjects system call performs event demultiplexing.
Provides coarse-grained concurrency control: The Reactor pattern serializes the invocation of event handlers at the level of event demultiplexing and dispatching within a process or thread. Serialization at the Initiation Dispatcher level often eliminates the need for more complicated synchronization or locking within an application process.
Efficiency: Threading may lead to poor performance due to context switching, synchronization, and data movement [2];
Programming simplicity: Threading may require complex concurrency control schemes;
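To ground the pattern, here is a minimal single-threaded reactor in raw java.nio; all names and the port are invented, and Netty's Boss/Worker loops are of course far more elaborate:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                  // the demultiplexer
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);    // register the event source

        while (true) {
            selector.select();                                // wait for events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                            // dispatch loop
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                     // hook: new connection
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // hook: data ready
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (ch.read(buf) < 0) { ch.close(); continue; }
                    buf.flip();
                    ch.write(buf);                            // echo back
                }
            }
        }
    }
}
```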