from:http://zeroturnaround.com/rebellabs/5-command-line-tools-you-should-be-using/

          Working on the command line will make you more productive, even on Windows!

          There’s an age-old debate between the usability and friendliness of GUI programs and the simplicity and productivity of CLI ones. But this is not a holy war I intend to trigger or fuel. In the past, RebelLabs has discussed built-in JDK tools and received amazing feedback, so I feel an urge to share more non-JDK command line tools which I simply couldn’t live without.

          I do firmly believe every developer who’s worth their salt should have at least some notion of how to work with the command line, if only because some tools only exist in CLI variants. Plus, because geek++!

          All other nuances that people pour words over, like the choice of operating system (OSX of course, they have beautiful aluminum cases), your favorite shell (really it should be ZSH), or the preference of Vim over Emacs (unless you have more fingers than usual) are much less relevant. OK, that was a little flamewar-like, but I promise that will be the last of it!

          So my advice would be that you should learn how to use tools at the command line, as it will have a positive impact on your happiness and productivity at least for half a century!

          Anyway, in this post I want to share with you five lesser-known yet pretty awesome command line gems. As an added bonus I will also advise on the proper way to use a shell under Windows, which is a pretty valuable bit of knowledge in itself.

          The reason I wanted to write this post in the first place is because I really enjoy using these tools myself, and want to learn about other command line tools that I don’t yet know about. So please, awesome reader, leave me a comment with your favourite CLI tools — that’d be grand! Now, assuming we all have a nice, workable shell, let’s go over some neat command line tools that are worth hearing about.

          0. HTTPie

           

          The first on my list is a tool called HTTPie. Fear not, this tool has nothing to do with Internet Explorer, fortunately. In essence, HTTPie is a friendlier replacement for cURL, the utility that performs HTTP requests from the command line. HTTPie adds nice features like auto-formatting and intelligent colour highlighting to the output, making it much more readable and useful. Additionally, it takes a very human-centric approach, not asking you to remember obscure flags and options. To perform an HTTP GET you simply run http; to perform a POST you run http POST. What could be easier or more beautiful?

          sample httpie output

          Almost all command line tools are conveniently packaged for installation, and HTTPie is no exception. To install it, run one of the following commands:

          • On OSX use homebrew, the best package manager to be found on OSX: brew install httpie
          • All other platforms, using Python’s pip: pip install --upgrade httpie

          I personally use HTTPie a lot when developing a REST API, as it allows me to very simply query the API, returning nicely structured, legible data. Without doubt this tool saves me serious work and frustration. Luckily the usage does not stop at just REST APIs. Generally speaking, all interactions over HTTP, whether it’s inputting or outputting data, can be done in a very human-readable format.
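          Most of that legibility comes from one simple step: HTTPie parses JSON response bodies and re-serialises them indented (and, for easy scanning, with keys sorted). Here is a rough standard-library sketch of just that formatting step (this is not HTTPie’s actual code, and the sample payload is made up):

```python
import json

def prettify(raw):
    """Re-serialise a raw JSON body roughly the way HTTPie displays it:
    indented four spaces, keys sorted for easy scanning."""
    return json.dumps(json.loads(raw), indent=4, sort_keys=True)

# A compact API response becomes legible at a glance:
print(prettify('{"id": 42, "active": true, "name": "rebel"}'))
```

          HTTPie does this (plus colour highlighting) for you on every response, which is exactly what makes it so pleasant for poking at REST APIs.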

          I’d encourage you to take a look at the website, spend the 10 seconds it takes to install and give it a go yourself. Try to get the source of any website and be amazed by the output.

          How unstoppable you can be with proper tools

          Protip: Combine the HTTPie greatness with jq for command line JSON manipulation or pup for HTML parsing and you’ll be unstoppable!

          1. Icdiff

           

          At ZeroTurnaround I am blessed to work with Mercurial, a very nice and easy-to-use VCS. On OSX the excellent GUI program SourceTree makes working with Mercurial an absolute breeze, even for the more complex stuff. Unfortunately I like to keep the number of programs/tabs/windows I have open to an absolute minimum. Since I always have a terminal window open, it makes sense to use the CLI.

          All was fine and well apart from one single pitfall in my setup, a feature I could barely go without: side-by-side diffs. Introducing icdiff. Of all the tools I use each day, this is the one I most appreciate. Let’s take a look at a screenshot:

          example of icdiff at work

          By itself, icdiff is an intelligent Python script, smart at detecting which of the differences are modifications, additions or deletions. The excellent colour highlighting in the tool makes it easy to distinguish between the three types of differences mentioned.
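          icdiff builds on difflib from Python’s standard library, and the three kinds of differences it highlights map directly onto the opcodes that difflib’s SequenceMatcher reports. A toy sketch of that classification (not icdiff’s actual code):

```python
import difflib

def classify_changes(old_lines, new_lines):
    """Label each changed hunk the way a side-by-side diff colours it:
    'replace' = modification, 'insert' = addition, 'delete' = deletion."""
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    return [tag for tag, *_rest in matcher.get_opcodes() if tag != "equal"]

print(classify_changes(["a", "b", "c"], ["a", "x", "c", "d"]))
# → ['replace', 'insert']: 'b' was modified into 'x', and 'd' was added
```

          icdiff’s real value is layering intelligent colouring and side-by-side layout on top of exactly this kind of analysis.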

          To get going with icdiff, do the following:

          • Via homebrew once again: brew install icdiff
          • Manually grab the Python script from the site above and put it in your PATH

          When you couple icdiff with a VCS such as Mercurial, you’ll see it really shine. To fully integrate it, you’ll need to complete two more configuration steps, already documented here. The gist of the instructions is to first add a wrapper script that lets icdiff’s one-by-one file diff operate on entire directories, and secondly to configure your VCS to actually use icdiff. The link above shows the details of configuring it for Mercurial, but porting this to Git shouldn’t be hard.

          2. Pandoc

           

          In the spirit of “practice what you preach” I set out to write this entire blogpost via a CLI. Most of the work was done using MacVim, in iTerm2 on OSX. All of the text was written and formatted using standard MarkDown syntax. The only issue to arise here is that it’s pretty difficult sometimes to accurately guess how your eventual text will come out.

          This is where the next tool comes in: Pandoc. A program so powerful and versatile it’s a wonder it was GPL’d in the first place. Let’s take a look at how we might use it.

          pandoc -f markdown -t html blogpost.md > blogpost.html 

          Think of a markup format, any markup format. Chances are, Pandoc can convert it to any other. For example, I’m writing this blogpost in Vim and use Pandoc to convert it from MarkDown into HTML to actually see the final result. It’s nice needing only my terminal and a browser, rather than being tied to a particular online platform: fully standalone and offline.

          Don’t let yourself be limited by simple formats like MarkDown though: give it some docx files, or perhaps some LaTeX. Export into PDF or EPUB, let it handle and format your citations. The possibilities are endless.

          Once again brew install pandoc does the trick. Did I mention I really like Homebrew? Maybe that should have made my tool list! Anyway, you get the gist of what that does!

          3. Moreutils

           

          The next tool in this post is actually a collection of nifty tools that didn’t make it into coreutils: Moreutils. It should be obtainable under the name moreutils in about any distro you can think of. OSX users can get all this goodness by brewing it like I did throughout this post:

          brew install moreutils 

          Here is a list of the included programs with short descriptions:

          • chronic: runs a command quietly unless it fails
          • combine: combine the lines in two files using boolean operations
          • ifdata: get network interface info without parsing ifconfig output
          • ifne: run a program if the standard input is not empty
          • isutf8: check if a file or standard input is utf-8
          • lckdo: execute a program with a lock held
          • mispipe: pipe two commands, returning the exit status of the first
          • parallel: run multiple jobs at once
          • pee: tee standard input to pipes
          • sponge: soak up standard input and write to a file
          • ts: timestamp standard input
          • vidir: edit a directory in your text editor
          • vipe: insert a text editor into a pipe
          • zrun: automatically uncompress arguments to command

          As the maintainer himself hints, sponge is perhaps the most useful tool, in that you can easily sponge up standard input into a file. However, it is not difficult to see the advantages of some of the other commands such as chronic, parallel and pee.
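          To see why sponge earns that praise, consider the classic pitfall it fixes: sort f > f empties f before sort ever reads it, whereas sort f | sponge f is safe because sponge soaks up all of its input before opening the output file. Its core behaviour fits in a few lines of Python (a conceptual sketch, not the real implementation):

```python
def sponge(stream, path):
    """Soak up *all* of the input before touching the output file,
    so the destination may safely appear earlier in the same pipeline."""
    data = stream.read()            # buffer everything first
    with open(path, "w") as out:    # only now truncate and overwrite
        out.write(data)
```

          The whole trick is the ordering: nothing is written, or even opened for writing, until the upstream command has finished producing its output.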

          My personal favourite though, and the ultimate reason to include this collection, is without doubt vipe.

          You can literally intercept your data as it moves from command to command through the pipe. Even though this is not a useful tool in your scripts, it can be extremely helpful when running commands interactively. Instead of giving you a useful example I will leave you with a modified fortune!

          sample vipe command

          4. Babun

           

          These days the Windows OS comes packaged with two different shells: its classic command line, and PowerShell. Let’s completely ignore those and have a look at the proper way of running command line tools under Windows: Babun! The reason this project is amazingly awesome is that it brings all the goodness of the *NIX command line to Windows in a completely pre-configured, no-nonsense manner.

          Moreover, its default shell is my beloved ZSH, though it can very easily be changed to use Bash, if that’s your cup of tea. With ZSH it also packages the highly popular oh-my-zsh framework, which combines all the benefits of ZSH with no config whatsoever thanks to some very sane defaults and an impressive plugin system.

          By default Babun is loaded with more applications than any sane developer may ever need, and is thus a rather solid 728 MBs(!) when expanded. In return you get essentials like Vim pre-installed and ready to go!

          screenshot of babun terminal

          Under the hood Babun is basically a fancy wrapper around Cygwin. If you already have a Cygwin install you can seamlessly re-use that one. Otherwise it will default to its own packaged Cygwin binaries, and supply you with access to those.

          Some more points of interest are that Babun provides its own package manager, which again wraps around Cygwin’s, and an update mechanism both for itself and for oh-my-zsh. The best thing is that no actual installation is required, nor is the usual requirement of admin rights necessary, so for those people on a locked down PC this may be just the thing they need!


          I hope this small selection of tools gave you at least one new cool toy to play with. As for me, it seems it is time to look at command line browsers before writing a follow-up blogpost, to fully ditch the world of the GUI!

          By all means fire up any comments or suggestions that you have, and let’s get some tool-sharing going on. If you just want to chat, ping RebelLabs on Twitter: @ZeroTurnaround; they are pretty chatty, and great, smart people.

          posted @ 2016-04-06 14:49 小馬歌 閱讀(288) | 評論 (0)編輯 收藏
           
           
          http://zeroturnaround.com/rebellabs/monadic-futures-in-java8/

          Few people will argue that asynchronous computation is cool and useful. In fact, the whole reactive programming idea is based on asynchronous computations being possible. Well, there’s more than that, but the core idea is to allow data and events to flow through your system and do something with the results when they become available.

          So let’s look at an example of an asynchronous function that everyone has seen and many have written themselves.

          $("#book").fadeIn("slow", function() {
              console.log("hurray");
          });

          This piece of JavaScript code takes a book element and fades it in. When the fading is complete, a callback function is called and the string “hurray” appears in the console. All is well and good in this trivial case, but once your system grows you can find yourself writing more and more of these nested callbacks.

          Callbacks are a common way of dealing with asynchronous or delayed actions. They are not the best option though; the problem with callbacks is that they tend to chain forever, callbacks for callbacks for callbacks, until you find yourself in a complete mess and every change in the code becomes extremely painful and slow.

          Maybe there are other ways to organize asynchronous code? In fact, there are: all you need to do is just tweak the perspective a bit. Imagine, if you had a type to represent a result of an async computation. It would be awesome, and your code would pass it around like every other value and be flat, fluid and readable.

          Well, why don’t we build it!

          When we’re done, we’ll have a monadic type Promise written in Java 8 that will make our asynchronous code wonderful. It’s not like it wasn’t ever done before, but I want to lead you through the process and help you understand what’s happening and why. If you are lazy or just prefer starting from code, check out the github repo.

          Getting to love monads in 9.5 minutes

          Oh, monads! Every programmer worth their morning coffee has written about them. Monads are what functional programming adepts love, use and praise. And there are thousands of tutorials and posts describing the concept.

          So if you know everything there is to know about monads and want to get a closer look at more interesting things, scroll down to the code below. Otherwise, bear with me for just ten minutes; maybe this will become your go-to explanation of what a monad is.

          A monad is a type that represents a context of computation. I bet you’ve heard that before, but have you thought about what it means?

          First of all, a monad doesn’t specify what is happening, that’s the responsibility of the computation within the context. A monad says what surrounds the computation that is happening.

          Now, if you want an image reference to help you out, you can think of a monad as a bubble. Some people prefer a box, but a box is something concrete so a bubble works better for me.
          A lovely bubble with a cute dragon-ish creature inside
          These monad-bubbles have two properties:

          • a bubble can surround something
          • a bubble can receive instructions about what it should do with the thing it surrounds

          The surrounding part is easy to model in a programming language. Just take something and return a bubble! A constructor or a factory method comes to mind immediately here. Let’s look at how it is formalized. I’m assuming that you have some knowledge of Haskell notation (which you probably should have anyway). So the function that takes something and returns a monad is usually called pure or return:

          return :: a -> m a 

          Or in Java, if we can have some Monad class already.

          public class Monad<T> {
              public Monad(T t) {
                  …
              }
          }

          See that was easy. In fact, we’re halfway there. Another thing we must add is the ability to receive instructions for working with this value T eaten by our bubble.

          What will help us is a bind function, which takes some form of an action and returns a different monad bubble that wraps this action executed on whatever was previously in the bubble.

          For the sake of completeness, here is how it looks in Haskell.

          (>>=)  :: m a -> (a -> m b) -> m b 

          So this bind function takes a monad over a (m a) and a function from a to m b, and returns a different monad. In Java, we’ll have this definition as follows.

          import java.util.function.Function;

          public abstract class Monad<T> {
              public abstract <V> Monad<V> bind(Function<T, Monad<V>> f);
          }

          That will complete our generic definition of monads so we can proceed with an implementation.

          Wait, what? I can have my monads in Java?

          First of all, there are many different types of monads. In that sense, a monad is more like an interface in Java terms. There is a List monad, a Maybe monad, an IO monad (for languages that are very pure and cannot allow themselves to have normal IO), etc.
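          In fact the JDK already ships a Maybe monad of sorts: Optional, where Optional.of plays the role of return/pure and flatMap plays bind. A small sketch (the parse helper is mine, purely for illustration):

```java
import java.util.Optional;

// Optional is the JDK's take on the Maybe monad:
// Optional.of is 'pure', flatMap is 'bind'.
public class MaybeDemo {
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty(); // the "empty bubble"
        }
    }

    public static void main(String[] args) {
        // Chained binds: an empty bubble short-circuits the whole pipeline.
        System.out.println(parse("21").flatMap(n -> Optional.of(n * 2)).orElse(-1));   // prints 42
        System.out.println(parse("oops").flatMap(n -> Optional.of(n * 2)).orElse(-1)); // prints -1
    }
}
```

          Keeping this concrete example in mind makes the Promise we build below much less mysterious: it is the same pure/bind pair, just wrapped around an asynchronous result instead of a possibly-missing one.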

          We will focus on creating a specific monad in Java, more specifically in Java 8. There is a good reason why we chose Java 8: we found out above that a monad has to manipulate functions, which is really not that enjoyable in pre-lambda versions of Java. Java 8, however, introduces lambdas and method references, so it will be much more pleasant to work with them.

          Your homemade Promise implementation

          Here we go, now we’ve established our goal to have a monadic type to represent async computations. We’ve got our tools, namely Java 8, and we are ready to hack.

          What we want to have is a Promise class that represents a result of asynchronous computation, either successful or erroneous.

          Let’s pretend that we already have some Promise class that accepts callbacks to execute when the main computation is finished. Luckily, we don’t have to pretend very hard; there are many implementations of that available: Akka’s Future, Play’s Promise, and so forth.

          For this post I’m using the one from the Play Framework, in which instances of Promise get redeemed when some thread calls the invoke() or invokeWithException() methods. It also accepts callbacks in the form of Play’s Promise-specific Action class arguments. Obviously, Promise has constructors already, but not only do I want to create new instances of Promise, I also want to be able to mark them completed with a value immediately. Here is how I can do it.

          public static <V> Promise<V> pure(final V v) {
              Promise<V> p = new Promise<>();
              p.invoke(v);
              return p;
          }

          The returned Promise is already redeemed and is ready to provide us with a result of the computation, which is precisely the given value.

          The bind implementation will look something like the code below. It takes a function and adds it as a callback to this instance. The callback will get the result of this computation and apply the given function to it. Whatever that function application returns or throws is used to redeem the resulting Promise.

          public <R> Promise<R> bind(final Function<V, Promise<R>> function) {
              Promise<R> result = new Promise<>();
              this.onRedeem(callback -> {
                  try {
                      V v = callback.get();
                      Promise<R> applicationResult = function.apply(v);
                      applicationResult.onRedeem(applicationCallback -> {
                          try {
                              R r = applicationCallback.get();
                              result.invoke(r);
                          }
                          catch (Throwable e) {
                              result.invokeWithException(e);
                          }
                      });
                  }
                  catch (Throwable e) {
                      result.invokeWithException(e);
                  }
              });
              return result;
          }

          Both applying the given function and getting a result from this are wrapped into the try-catch blocks, so exceptions are propagated to the resulting instance of Promise, just as one might expect.

          With these two constructs, it’s very easy to chain asynchronous computations while avoiding going deeper and deeper into the callback hell. In the following synthetic example, we do exactly that.

          public static void example1()
                  throws ExecutionException, InterruptedException {
              Promise<String> promise = Async.submit(() -> {
                  String helloWorld = "hello world";
                  long n = 500;
                  System.out.println("Sleeping " + n + " ms example1");
                  Thread.sleep(n);
                  return helloWorld;
              });
              Promise<Integer> promise2 = promise.bind(string ->
                      Promise.pure(Integer.valueOf(string.hashCode())));
              System.out.println("Main thread example2");
              int hashCode = promise2.get();
              System.out.println("HashCode = " + hashCode);
          }

          That is basically it. We’ve implemented a monadic type Promise to represent a result of an async action.

          Production-ready completable future

          For those of you who have borne with me this far, I just want to say a few final words about the quality of this implementation. Naturally, the above-mentioned GitHub repository has some tests proving that, in some contexts, this might all work. However, I wouldn’t recommend using these Promises in production.

          One reason is that Java 8 already contains a class that represents the result of an async computation and is monadic: welcome, CompletableFuture!

          It does exactly what we want it to do and features several methods that allow you to bind a function to the result of an existing computation. Moreover, it provides methods to apply a function or a consumer, which is a void function by the way, or a plain old Runnable.

          On top of that, the methods that end in *Async will execute the function asynchronously using the common ForkJoinPool executor. Otherwise, you can also supply an executor of your own choosing.
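          In the vocabulary of this post, completedFuture plays pure and thenCompose plays bind. Here is the earlier hash-code example redone on top of CompletableFuture (a minimal sketch; the class and method names are mine):

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    // completedFuture is 'pure'; thenCompose is 'bind'.
    static CompletableFuture<Integer> hashOf(String s) {
        return CompletableFuture.supplyAsync(() -> s)
                .thenCompose(str -> CompletableFuture.completedFuture(str.hashCode()));
    }

    public static void main(String[] args) {
        System.out.println("HashCode = " + hashOf("hello world").join());
    }
}
```

          Everything our homemade bind did by hand, including propagating exceptions to the downstream future, CompletableFuture does for you.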

          Conclusion

          Hopefully, this post shed some light on what a monad is, and next time you are about to write a callback, you might want to consider a different approach.

          In the post above we’ve looked at what monads are and how one can implement monadic classes in Java 8. Monads are a great help in organizing the flow of data through your code, and we’ve shown this with the example of a Promise monad that represents the result of an asynchronous computation. All the code from this blogpost is available for pondering in the GitHub repo.

          Stay tuned for my next post, in which I plan to cover how to use the javaflow library to implement asynchronous awaiting for the promise to return a result. So you can get even more reactive :-)


          Want to learn more about what rocks in Java 8? Check out Java 8 Revealed: Lambdas, Default methods and Bulk Data Operations by Anton Arhipov


           
          from:http://mmcgrana.github.io/2010/07/threaded-vs-evented-servers.html

          Threaded vs Evented Servers

          July 24 2010

          Broadly speaking, there are two ways to handle concurrent requests to a server. Threaded servers use multiple concurrently-executing threads that each handle one client request, while evented servers run a single event loop that handles events for all connected clients.

          To choose between the threaded and evented approaches you need to consider the load profile of the server. This post describes a simple mathematical model for reasoning about these load profiles and their implications for server design.

          Suppose that requests to a server take c CPU milliseconds and w wall clock milliseconds to execute. The CPU time is spent actively computing on behalf of the request, while the wall clock time is the total time including that time spent waiting for calls to external resources. For example, a web application request might take 5 ms of CPU time c and 95 ms waiting for a database call for a total wall time w of 100 ms. Let’s also say that a threaded version of the server can maintain up to t threads before performance degrades because of scheduling and context-switching overhead. Finally, we’ll assume single-core servers.

          If a server is CPU bound then it will be able to respond to at most

          (/ 1000 c) 

          requests per second. For example, if each request takes 2 ms of CPU time then the CPU can only handle

          (/ 1000 2) => 500 

          requests per second.

          If the server is thread bound then it can handle at most

          (* t (/ 1000 w)) 

          requests per second. This expression is similar to the one for CPU time, but here we multiply the result by t to account for the t concurrent threads.

          The throughput of a threaded server is the minimum of the CPU and thread bounds since it is subject to both constraints. An evented server is not subject to the thread constraint since it only uses one thread; its throughput is given by the CPU bound. We can express this as follows:

          (defn max-request-rate [t c w]
            (let [cpu-bound    (/ 1000 c)
                  thread-bound (* t (/ 1000 w))]
              {:threaded (min cpu-bound thread-bound)
               :evented  cpu-bound}))

          Now we’ll consider some different types of servers and see how they might perform with threaded and evented implementations.

          For the examples below I’ll use a t value of 25. This is a modest number of threads that most threading implementations can handle.

          Let’s start with a classic example: an HTTP proxy server. These servers require very little CPU time, so say c is 0.1 ms. Suppose that the downstream servers respond within milliseconds, for a wall time w of, say, 10 ms. Then we have

          (max-request-rate 25 0.1 10) => {:threaded 2500, :evented 10000} 

          In this case we expect a threaded server to be able to handle 2500 requests per second and an evented server 10000 requests per second. The higher performance of the evented server implies that the thread bound is limiting for the threaded server.

          Another familiar example is the web application server. Let’s first consider the case where we have a lightweight app that does not access any external resources. In this case the request parsing and response generation might take a few milliseconds; say c is 2 ms. Since no blocking calls are made this is the value of w as well. Then

          (max-request-rate 25 2 2) => {:threaded 500, :evented 500} 

          Here the threaded server performs as well as the evented server because the workload is CPU bound.

          Suppose we have a more heavyweight app that is making calls to external resources like the filesystem and database. In this case the amount of CPU time will be somewhat larger than in the previous case but still modest; say c is 5 ms. But now that we are waiting on external resources we should expect a w value of, say, 100 ms. Then we have

          (max-request-rate 25 5 100) => {:threaded 200, :evented 200} 

          Even though we are making a lot of blocking calls, the workload is still CPU bound and the threaded and evented servers will therefore perform comparably.

          Suppose now that we are implementing a background service such as an RSS feed fetcher that makes high-latency requests to external services and then performs minimal processing of the results. In this case c may be quite low, say 2 ms, but w will be high, say 250 ms. Then

          (max-request-rate 25 2 250) => {:threaded 100, :evented 500} 

          Here an evented server will perform better. The CPU load is sufficiently low and the external resource latency sufficiently high that the blocking external calls limit the threaded implementation.

          Finally, consider the case of long polling clients. Here clients establish a connection to the server and the server responds only when it has a message it wants to send to the client. Suppose that we have a lightweight app such that c is 1 ms, but that response messages are sent to the client after 10 seconds such that the w value is 10000 ms. Then

          (max-request-rate 25 1 10000) => {:threaded 2.5, :evented 1000} 

          If the server were really limited to 25 threads and each client required its own thread, we could only allow 2.5 new connections per second if we wanted to avoid exceeding the thread allocation. An evented server on the other hand could saturate the CPU by accepting 1000 requests per second.

          Even if we increase the maximum number of threads t by an order of magnitude to 250, the evented approach still fares better:

          (max-request-rate 250 1 10000) => {:threaded 25, :evented 1000} 

          Indeed, a threaded server would need to maintain 10000 threads in order to be able to accept requests at the rate of the evented server.

          Now that we have seen some specific examples of the model we should step back and note the patterns. In general, an evented architecture becomes more favorable as the ratio of wall time w to CPU time c increases, i.e. as proportionally more time is spent waiting on external resources. Also, the viability of a threaded architecture depends on the strength of the underlying threading implementation; the higher the thread threshold t, the more wait time can be tolerated before eventing becomes necessary.
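          For readers without a Clojure REPL handy, the same model translates directly into Python. Note that the two servers’ throughputs coincide exactly when t >= w / c, which is another way of stating the wall-time-to-CPU-time ratio above:

```python
def max_request_rate(t, c, w):
    """t = usable threads, c = CPU ms per request, w = wall-clock ms per request."""
    cpu_bound = 1000 / c
    thread_bound = t * (1000 / w)
    return {"threaded": min(cpu_bound, thread_bound), "evented": cpu_bound}

# The background-service scenario from above: evented wins once w/c outstrips t.
print(max_request_rate(25, 2, 250))
# → {'threaded': 100.0, 'evented': 500.0}
```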

          In addition to the quantitative performance implications captured by this model, there are several qualitative factors that influence the suitability of threaded and evented architectures for particular servers.

          One factor is the fit of the server architecture to the work that the server is doing internally. For example, proxying is well suited to evented architectures because the work being done is fundamentally evented: upon receiving an input chunk from the client the chunk is relayed to a downstream server. In contrast, the business logic implemented by web applications is more naturally described in a synchronous style. The callbacks required by an evented architecture become unwieldy in complex application code.

          Another consideration is memory coordination and consistency. Evented servers executing in a single event loop do not need to worry about the correctness and performance implications of maintaining consistent shared memory, but this may be a problem for threaded servers. Threaded servers therefore attempt to minimize memory shared among threads. This approach works well for the servers that we discussed above - proxies, web applications, background workers, and long poll endpoints - as none of them need to share state internally across client sessions. But fundamentally stateful servers like caches and databases cannot avoid this problem.

          The threaded approach can be a non-starter if the underlying platform does not support proper threading. In these cases blocking calls to external resources prevent the process from using the CPU in other threads, even if the blocker is not itself using the CPU. C Ruby falls into this category. In these cases t is effectively 1, making evented architectures relatively more appealing.

          At the other extreme, the assumption of t being 25 or even 250 may be too modest for some platforms. These low t values are an artifact of threading implementations and not intrinsic to the threading model itself. More scalable threading implementations make threaded servers viable for higher w to c ratios.

          An evented approach can be compromised by a lack of evented libraries for the platform. For evented servers to perform optimally, all external resources must be accessed through nonblocking libraries. Such libraries are not always available, especially on platforms that have typically used threaded/blocking models like the JVM and C Ruby. Fortunately this situation is improving as developers publish more nonblocking libraries in response to the demand from implementors of evented servers. Indeed, the requirement of pervasive evented libraries for optimal performance is one reason that node.js is so compelling for building evented servers.

posted @ 2016-04-05 17:47 by 小馬歌

          Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle’s code is protocol agnostic, simplifying the implementation of new protocols.

Finagle uses a clean, simple, and safe concurrent programming model, based on Futures. This leads to safe and modular programs that are also simple to reason about.
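Finagle's Futures are Scala constructs, but the compositional style they enable can be illustrated with Java's standard CompletableFuture as a rough analogy (this is not Finagle's actual API):

```java
import java.util.concurrent.CompletableFuture;

public class FutureComposition {
    public static void main(String[] args) {
        // Pretend this is an asynchronous RPC call; here it is
        // completed immediately for illustration.
        CompletableFuture<Integer> userId = CompletableFuture.completedFuture(7);

        CompletableFuture<String> greeting = userId
            .thenApply(id -> "user-" + id)       // transform the result
            .thenCompose(name ->                 // chain a dependent async call
                CompletableFuture.completedFuture("hello, " + name))
            .exceptionally(t -> "fallback");     // declarative error handling

        System.out.println(greeting.join()); // hello, user-7
    }
}
```

Because each step is declared as a transformation of a future value, no thread blocks while the chain is pending, which is the property that makes this model attractive for high-concurrency servers.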

Finagle clients and servers expose statistics for monitoring and diagnostics. They are also traceable through a mechanism similar to Dapper's (another Twitter open source project, Zipkin, provides trace aggregation and visualization).

          The quickstart has an overview of the most important concepts, walking you through the setup of a simple HTTP server and client.

A section on Futures follows, motivating and explaining the important ideas behind the concurrent programming model used in Finagle. The next section documents Services & Filters, which are the core abstractions used to represent clients and servers and to modify their behavior.
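Conceptually, a Service is a function from a request to a future response, and a Filter wraps a Service to add cross-cutting behavior. A hypothetical Java sketch of that idea (Finagle's real API is Scala and differs in detail):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class ServiceFilterSketch {
    // A Service is just Req -> Future[Rep].
    interface Service<Req, Rep> extends Function<Req, CompletableFuture<Rep>> {}

    // A Filter wraps a Service and returns a new Service,
    // transforming requests and/or responses along the way.
    static <Req, Rep> Service<Req, Rep> loggingFilter(Service<Req, Rep> next) {
        return req -> {
            System.out.println("request: " + req);  // cross-cutting behavior
            return next.apply(req);
        };
    }

    public static void main(String[] args) {
        Service<String, String> echo =
            req -> CompletableFuture.completedFuture("echo: " + req);

        // Filters compose with services to yield a new service.
        Service<String, String> decorated = loggingFilter(echo);
        System.out.println(decorated.apply("ping").join());
    }
}
```

Because filters and services share one shape, behaviors like logging, retries, or timeouts can be stacked without the underlying service knowing about them.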

          Other useful resources include:

posted @ 2016-04-05 17:46 by 小馬歌
Abstract: from: http://www.infoq.com/cn/articles/hadoop-ten-years-interpretation-and-development-forecast. Editor's note: Hadoop was born on January 28, 2006, ten years ago now. It has changed how enterprises store, process, and analyze data, accelerated the development of big data, formed its own enormously popular technology ecosystem, and is very widely used. As Hadoop turns ten in 2016... [Read full text]
posted @ 2016-03-29 16:59 by 小馬歌
Dubbo is the core framework of Alibaba's internal SOA service governance solution, supporting 3,000,000,000+ requests per day across 2,000+ services, and is widely used across the member sites of Alibaba Group. Since it was open-sourced in 2011, Dubbo has been adopted by many companies outside Alibaba.

Project home page: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm

To give readers a deeper understanding of the framework, in this issue we interviewed Liang Fei (梁飛), one of the main developers on the Dubbo team.

ITeye is committed to providing a free promotion platform for excellent domestic open source projects. If you and your team would like to introduce your open source project to more developers, or there are open source projects you would like us to interview, let us know: send a private message to the ITeye administrator, or email webmaster@iteye.com.

First, please introduce yourself!

My name is Liang Fei (梁飛), alias Xuji (虛極). I was previously responsible for the Dubbo service framework, and have since transferred to Tmall.

My blog: http://javatar.iteye.com

What is Dubbo? What can it do?

Dubbo is a distributed service framework and an SOA governance solution. Its main features include high-performance NIO communication with multi-protocol integration, dynamic service addressing and routing, soft load balancing and fault tolerance, and dependency analysis and service degradation.

See: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm

Which scenarios is Dubbo suitable for?

When a website grows large, it inevitably needs to be split into services, in order to improve development efficiency, tune performance, and conserve key contended resources.

When services become numerous, service URL and address information grows explosively, configuration management becomes very difficult, and the single-point pressure on F5 hardware load balancers keeps increasing.

As things develop further, dependencies between services become tangled and complex. It can even be unclear which application should start before which, and architects can no longer fully describe the architectural relationships between applications.

Next, as service call volume keeps growing, capacity problems surface: how many machines does this service need? When should machines be added? And so on.

Dubbo can be used to solve all of these problems.

See: Dubbo's background and requirements

What is the design philosophy behind Dubbo?

The framework is highly extensible, built on a microkernel-plus-plugins architecture, and fully documented, which makes secondary development convenient and gives it strong adaptability.

See: Developer Guide - Framework Design
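The microkernel-plus-plugins design mentioned above can be sketched as a minimal plugin registry in Java (hypothetical names; Dubbo's actual extension mechanism is its own SPI and differs in detail):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ExtensionRegistrySketch {
    // A hypothetical extension point: protocols are plugins, not core code.
    interface Protocol { String name(); }

    // The "microkernel" only knows how to look plugins up by name;
    // it carries no protocol logic of its own.
    static final Map<String, Supplier<Protocol>> registry = new HashMap<>();

    static void register(String name, Supplier<Protocol> factory) {
        registry.put(name, factory);
    }

    public static void main(String[] args) {
        // Built-in and third-party plugins register the same way,
        // so third parties are treated as equals to the core.
        register("dubbo", () -> () -> "dubbo");
        register("http",  () -> () -> "http");

        Protocol p = registry.get("http").get(); // selection driven by config
        System.out.println(p.name());
    }
}
```

The point of the pattern is that new protocols (or registries, load balancers, and so on) can be added from outside without touching the kernel.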

What are Dubbo's requirements and dependencies?

Dubbo runs on JDK 1.5 and above. By default it depends on packages such as javassist, netty, and spring, but none of these are mandatory; with the right configuration, Dubbo can run without any third-party libraries.

See: User Guide - Dependencies

How does Dubbo perform?

Dubbo reduces handshakes through long-lived connections, processes messages concurrently over a single connection using NIO and thread pools, and compresses data into binary streams, making it faster than conventional short-connection protocols such as HTTP. Inside Alibaba, it supports more than 2,000 services and over 3 billion requests per day, with a single machine handling up to nearly 100 million requests per day.

See: Dubbo performance test report

Compared with Taobao's HSF, what are Dubbo's distinguishing features?

1. Dubbo is lighter to deploy than HSF. HSF requires a designated container such as JBoss, plus a sar package extension installed in that container, which is highly intrusive to the user's runtime environment; to run on other containers such as WebLogic or WebSphere, you must extend the container yourself to be compatible with HSF's ClassLoader loading. Dubbo has no such requirements and can run in any Java environment.

2. Dubbo is more extensible than HSF and convenient for secondary development. No single framework can cover every need, so Dubbo has always held to the principle of treating third parties as equals: every feature can be extended externally without modifying Dubbo's core code, and even Dubbo's built-in features are implemented through the same extension mechanism as third-party ones. With HSF, adding a feature or replacing part of an implementation is difficult. For example, Alipay and Taobao use different HSF branches, because adding features meant changing core code and a separate branch had to be forked and developed independently. Even if HSF were open-sourced at this stage, it would be hard to reuse without rearchitecting.

3. HSF depends on many internal systems, such as the configuration center, notification center, monitoring center, and single sign-on; open-sourcing it would require a great deal of decoupling work. Dubbo, by contrast, leaves extension points for integrating with each such system, has already cleaned up all of these dependencies, and provides replacements that the open source community can use directly.

4. Dubbo has more features than HSF. Apart from ClassLoader isolation, Dubbo is essentially a superset of HSF; it also supports more protocols and more registry integrations, in order to fit more website architectures.

How does Dubbo address security?

Dubbo mainly targets internal services. For externally facing services, Alibaba has an open platform to handle security and flow control, so Dubbo implements relatively few security features; essentially, it deters honest mistakes rather than determined attackers, guarding only against accidental calls.

Dubbo uses token verification to prevent consumers from bypassing the registry and connecting directly, with authorization then managed at the registry. Dubbo also provides service blacklists and whitelists to control which callers a service allows.
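The token scheme described above can be sketched generically in Java (illustrative names only, not Dubbo's actual API; see Dubbo's documentation for its real configuration):

```java
import java.util.Map;
import java.util.Objects;

public class TokenCheckSketch {
    // Provider-side check: reject invocations whose attached token does not
    // match the token the registry issued for this service.
    static boolean authorize(String expectedToken, Map<String, String> attachments) {
        return Objects.equals(expectedToken, attachments.get("token"));
    }

    public static void main(String[] args) {
        // Placeholder value; in practice the token is distributed to
        // authorized consumers via the registry.
        String issued = "3f2a-demo-token";

        // A consumer that went through the registry carries the token...
        System.out.println(authorize(issued, Map.of("token", issued)));
        // ...while one that bypassed the registry does not.
        System.out.println(authorize(issued, Map.of()));
    }
}
```

Since the provider trusts only tokens handed out by the registry, a consumer that dials the provider directly cannot pass the check, which is exactly the "prevent direct connection" behavior described.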

See: Dubbo's token verification

How is Dubbo used inside and outside Alibaba?

Inside Alibaba, all subsidiaries other than the Taobao family use Dubbo, including the Chinese main site, the international main site, AliExpress, Alibaba Cloud, Ali Finance, Ali Academy, Liangwuxian, Laiwang, and others.

Since being open-sourced, it has been widely used by companies such as Qunar, JD.com, Geely Auto, Founder Securities, Haier, Focus Technology, Zhongrun Sifang, Huaxin Cement, and Hikvision, with new companies joining all the time. Community discussion and contribution are active, and it has received very high praise from users.

See: Dubbo's known users

What are Dubbo's plans regarding distributed transactions and multi-language support?

Distributed transactions will probably not be supported for now, because supporting only simple XA/JTA two-phase-commit transactions would not be very practical. Users can implement business-compensation events themselves, or more complex distributed transactions; Dubbo has many extension points that can be used for such integration.

On the multi-language front, Dubbo has a C++ implementation, but it is used very narrowly internally and has not been strongly validated. C++ development resources are also tight, so there is no capacity to prepare a C++ open source release.

Which open source license does Dubbo use? What should commercial users be aware of?

Dubbo uses the Apache License 2.0, a business-friendly license; you can use it for free in non-open-source commercial software.

You may modify it and redistribute it; the only requirements are that you retain Alibaba's copyright and keep the original license notice when redistributing.

See: Dubbo's open source license

What does the Dubbo development team look like?

Dubbo has six developers involved in development and testing. Every developer is experienced, the team cooperates smoothly, development proceeds at a steady rhythm, and there is a complete quality assurance process. The team consists of:

• Liang Fei (梁飛) (developer / product management)
• Liu Haomin (劉昊旻) (developer / process management)
• Liu Chao (劉超) (developer / user support)
• Li Ding (李鼎) (developer / user support)
• Chen Lei (陳雷) (developer / quality assurance)
• Lü Gang (閭剛) (developer / open source operations)

[Photo] From left to right: Liu Chao, Liang Fei, Lü Gang, Chen Lei, Liu Haomin, Li Ding

See: Dubbo's team members

How can other developers participate? What can they work on?

Developers can fork the project on GitHub and push their changes back; after we review and test them, they are merged into the trunk.

GitHub: https://github.com/alibaba/dubbo

Developers can claim small bug fixes on JIRA, or pick up larger feature modules from the developer guide page.

JIRA: http://code.alibabatech.com/jira/browse/DUBBO (currently unavailable)

Developer guide: http://alibaba.github.io/dubbo-doc-static/Developer+Guide-zh.htm

What are Dubbo's future development plans?

Dubbo's RPC framework is basically stable; the future focus will be on service governance, including architecture analysis, monitoring and statistics, degradation control, workflow coordination, and more.

See: http://alibaba.github.io/dubbo-doc-static/Roadmap-zh.htm
posted @ 2016-03-24 13:21 by 小馬歌