厚积而薄发 (Accumulate deeply, release slowly) -- every day is a brand-new start
http://www.aygfsteel.com/ytl-zlq/

最大公约数 (Greatest common divisor)
http://www.aygfsteel.com/ytl-zlq/archive/2013/03/21/396781.html
Read the full article

Posted by ytl, 2013-03-21 09:39
The JVM memory model and the eden area (repost)
http://www.aygfsteel.com/ytl-zlq/archive/2012/03/01/371093.html


On the Java memory model

       Memory models differ across platforms, but the JVM memory model specification is uniform. Java's multithreading problems all ultimately surface in the Java memory model: thread safety is nothing more than controlling ordered access to, or modification of, some resource by multiple threads. In short, the Java memory model has to solve two main problems: visibility and ordering. We all know computers have caches; the processor does not go to main memory for every piece of data. The JVM defines its own memory model, shielding the memory-management details of the underlying platform; what a Java developer needs to understand is how, on top of the JVM memory model, to solve multithreaded visibility and ordering.
       So what is visibility? Threads cannot pass data to one another directly; they can only communicate through shared variables. The Java memory model (JMM) specifies that the JVM has a main memory shared by all threads. When an object is created with new, it is allocated in main memory. Each thread also has its own working memory, which stores copies of some objects from main memory (a thread's working memory is of course limited in size). When a thread operates on an object, it executes in this order:
 (1) copy the variable from main memory into the working memory (read and load)
 (2) execute the code, changing the shared variable's value (use and assign)
 (3) flush the working-memory data back to the corresponding main-memory variable (store and write)

The JVM specification defines the operations a thread performs on main memory: read, load, use, assign, store, write. When a shared variable has copies in the working memory of several threads and one thread modifies the shared variable, the other threads should be able to see the modified value; this is the multithreaded visibility problem.
       Then what is ordering? A thread cannot reference a variable directly from main memory: if the variable is not in the thread's working memory, a copy is first fetched from main memory into the working memory (this step is read-load), and afterwards the thread references that copy. When the same thread refers to the field again, it may re-fetch the copy from main memory (read-load-use), or it may reference the existing copy directly (use); that is, the order of read, load and use can be decided by the JVM implementation.
       A thread cannot assign to a main-memory field directly either: it assigns the value to the variable copy in its working memory (assign), and when done, that copy is synchronized back to main storage (store-write); when exactly the synchronization happens is decided by the JVM implementation. When the same thread assigns to a field repeatedly, for example:

Java code:
for (int i = 0; i < 10; i++)
    a++;
the thread may keep assigning only to the copy in its working memory and synchronize to main storage only after the last assignment, so the order of assign, store, write can likewise be decided by the JVM implementation. Suppose there is a shared variable x and thread a executes x = x + 1. From the description above we know x = x + 1 is not an atomic operation; it executes as follows:
1 read the copy of variable x from main memory into working memory
2 add 1 to x
3 write the incremented value of x back to main memory
If another thread b executes x = x - 1, its steps are:
1 read the copy of variable x from main memory into working memory
2 subtract 1 from x
3 write the decremented value of x back to main memory
Clearly the final value of x is then unreliable. Suppose x is currently 10, thread a adds 1 and thread b subtracts 1; on the surface it seems x should still end up as 10, but with multiple threads the following can happen:
1) thread a reads the copy of x from main memory into its working memory; there x is 10
2) thread b reads the copy of x from main memory into its working memory; there x is 10
3) thread a adds 1 to x in its working memory; there x is 11
4) thread a commits x to main memory; x in main memory is 11
5) thread b subtracts 1 from x in its working memory; there x is 9
6) thread b commits x to main memory; x in main memory is 9
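The lost-update interleaving listed above disappears once the read-modify-write step is made atomic. A minimal sketch (the class and method names are mine, not from the original post) using java.util.concurrent.atomic.AtomicInteger, whose increment and decrement perform the three steps as one indivisible operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdateDemo {
    // x = x + 1 compiles to read / add / write, so two plain threads can
    // lose updates; AtomicInteger does the read-modify-write atomically
    static int run() {
        AtomicInteger x = new AtomicInteger(10);
        Thread a = new Thread(() -> { for (int i = 0; i < 10000; i++) x.incrementAndGet(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 10000; i++) x.decrementAndGet(); });
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return x.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 10: every +1 is balanced by a -1
    }
}
```

With a plain int field and no synchronization, the same two threads could finish with an unpredictable value; AtomicInteger (or a synchronized block) restores the expected result.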

 

The JVM memory model and the eden area

So what exactly is a thread's "working memory"? Some take it to be the thread's stack, but that understanding is not correct. Looking at the JLS (the Java Language Specification) description of thread working memory: a thread's working memory is just an abstract description of the CPU's registers and caches.

      This may seem puzzling: we are discussing the JVM memory model, so why drag the CPU into it? It is worth spelling out here, or much of this stays murky. Setting the Java virtual machine aside: we all know that in today's computers, when the CPU computes it does not always fetch its data from memory; its read priority is registers, then cache, then memory. A thread consumes CPU; when a thread computes, the original data comes from memory, and during the computation some data may be read and written frequently and is kept in registers and caches; when the thread's computation finishes, that cached data should be written back to memory at an appropriate time. When multiple threads read and write the same memory data at once, multithreaded concurrency problems arise, involving three properties: atomicity, ordering, visibility. In the article "A summary of thread safety", for ease of understanding, I lumped atomicity and ordering together as "ordered multithreaded execution". Every platform that supports multithreading faces this problem, and a language supporting multithreading that runs on such a platform should provide a way to solve it.

      synchronized, volatile, and lock mechanisms (such as synchronized blocks, wait queues, blocking queues) and so on. These facilities exist at the syntax level, but we should understand them by their essence, not merely know that a synchronized keyword guarantees synchronization and stop there. What I describe here is the JVM memory model: dynamic, and oriented toward multithreaded concurrency. I follow the JLS term "working memory" only because I do not want to drag in too many low-level details, since "A summary of thread safety" is meant to explain Java thread synchronization from the syntax level and the usage scenarios of each keyword.

Now to the JVM's eden area. JVM memory is divided into many regions:

1. Program counter
Each Java thread has a program counter, used to record which instruction of the current method the program has executed up to.
2. VM stack
Every time one of a thread's methods is executed, a frame is created to store the local variable table, operand stack, dynamic linking, method entry and exit information, and so on. Each method call, from invocation to completion, corresponds to one push and one pop on the VM stack. If the stack depth requested by a thread exceeds what the virtual machine allows, a StackOverflowError is thrown; if the VM stack can be dynamically extended (the VM Spec also allows fixed-length VM stacks) but not enough memory can be obtained for the extension, an OutOfMemoryError is thrown.
3. Native method stack
4. Heap
Each thread's stack is private to that thread, while the heap is shared by all threads. When we new an object, it is allocated in the heap. But the heap is not one simple space: it is itself divided into several regions, and the reason for this division is the JVM's garbage collection. Today's JVM GCs all collect by generation; the heap is roughly divided into three big parts: the young generation, the old generation, and the permanent generation (virtual). The young generation is further divided into the eden area and the s0 and s1 areas. When an object is created, small objects and short-lived objects are in general placed in the eden area of the young generation. When eden fills up there is a small-scope gc (minor gc); when the whole young generation fills up there is a large-scope gc (major gc), during which some objects in the young generation are moved to the old generation.
5. Method area
It is in fact the permanent generation (Permanent Generation). The method area stores the structure information of every class, including the constant pool, field descriptions, method descriptions and so on. The VM Spec places very loose restrictions on this region: besides not requiring contiguous memory, just like the Java heap, implementations may choose a fixed size or an expandable one, and may even choose not to implement garbage collection here. Comparatively speaking, garbage collection is a relatively rare event in this region, but it is not the case, as some descriptions have it, that the permanent generation never sees GC (at least for today's mainstream commercial JVM implementations). GC here mainly reclaims the constant pool and unloads classes, although the "results" of such collection are generally unsatisfactory, especially class unloading, whose conditions are quite harsh.
6. Constant pool
Besides descriptions of the class version, fields, methods, interfaces and so on, the class file contains one more piece of information: the constant pool (constant_pool table), used to store constants already known at compile time. After the class is loaded this content is placed in the constant pool of the method area (permanent generation). But the Java language does not require that only constants preplaced in the class's constant table at compile time can enter the method-area constant pool; new content can also be put into the constant pool at run time, the most typical example being the String.intern() method.
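The String.intern() behavior mentioned for the constant pool can be observed directly: a string created with new lives in the heap, while intern() returns the canonical pooled copy. A small sketch (the variable names are mine):

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "jvm";               // placed in the constant pool when the class is loaded
        String heapCopy = new String("jvm");  // a distinct object allocated in the heap
        System.out.println(heapCopy == literal);          // false: two different objects
        System.out.println(heapCopy.intern() == literal); // true: intern() returns the pooled copy
    }
}
```

The literal is interned at class loading, so intern() on any equal heap string hands back that same pooled instance.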



Posted by ytl, 2012-03-01 18:12
Java source code study
http://www.aygfsteel.com/ytl-zlq/archive/2011/09/24/359414.html

       On the transient, volatile and strictfp keywords in Java: http://www.iteye.com/topic/52957
       (1) ArrayList is backed by an Object array: private transient Object[] elementData; when it is instantiated with the no-argument constructor, the default array length is 10.
      (2) Implementation of the add method:
      public boolean add(E e) {
          // ensureCapacityInternal decides whether adding the new element requires
          // growing the array: grow if needed, otherwise do nothing
          ensureCapacityInternal(size + 1);  // the method called in JDK 7; JDK 5 used ensureCapacity here
          elementData[size++] = e; // store the object in the array and bump the stored-length counter size by 1
          return true;
      }
     The JDK 7 implementation of ensureCapacityInternal:
    private void ensureCapacityInternal(int minCapacity) {
        modCount++; // modification count
        // overflow-conscious code
        if (minCapacity - elementData.length > 0)
            grow(minCapacity); // grow the array length if needed
    }
    /**
     * The maximum size of array to allocate. -- the largest array length that may be requested.
     * Some VMs reserve some header words in an array.
     * Attempts to allocate larger arrays may result in
     * OutOfMemoryError: Requested array size exceeds VM limit -- thrown if the
     * requested array would exceed what the JVM allows.
     */
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8; // why minus 8? see the second comment line above
    /**
     * Increases the capacity to ensure that it can hold at least the
     * number of elements specified by the minimum capacity argument.
     *
     * @param minCapacity the desired minimum capacity
     */
    private void grow(int minCapacity) {
        // overflow-conscious code
        int oldCapacity = elementData.length;
        int newCapacity = oldCapacity + (oldCapacity >> 1); // the new length is 1.5x the old; the bit shift is cheaper (JDK 5 used (oldCapacity * 3) / 2 + 1)
        if (newCapacity - minCapacity < 0)
            newCapacity = minCapacity;
        if (newCapacity - MAX_ARRAY_SIZE > 0) // the request exceeds the cap; you know what happens
            newCapacity = hugeCapacity(minCapacity);
        // minCapacity is usually close to size, so this is a win:
        elementData = Arrays.copyOf(elementData, newCapacity);
    }
    // the maximum length that can be requested
    private static int hugeCapacity(int minCapacity) {
        if (minCapacity < 0) // overflow
            throw new OutOfMemoryError();
        return (minCapacity > MAX_ARRAY_SIZE) ?
            Integer.MAX_VALUE :
            MAX_ARRAY_SIZE;
    }
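The growth rule in grow() is easy to trace by hand: starting from the default capacity of 10, every expansion computes oldCapacity + (oldCapacity >> 1). A small standalone sketch (not part of the JDK source) replaying the formula:

```java
public class GrowthDemo {
    // replays the JDK 7 growth formula from ArrayList.grow(): new = old + old/2
    static int grow(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    public static void main(String[] args) {
        int capacity = 10; // default ArrayList capacity
        for (int step = 0; step < 4; step++) {
            System.out.print(capacity + " ");
            capacity = grow(capacity);
        }
        System.out.println(capacity); // 10 15 22 33 49
    }
}
```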





Posted by ytl, 2011-09-24 15:30
Metering points, metering classification, etc.
http://www.aygfsteel.com/ytl-zlq/archive/2011/08/07/355934.html
Read the full article

Posted by ytl, 2011-08-07 11:02
Algorithms to quicksort
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/08/349777.html

Quicksort

Quicksort is a fast sorting algorithm, which is used not only for educational purposes, but widely applied in practice. On the average, it has O(n log n) complexity, making quicksort suitable for sorting big data volumes. The idea of the algorithm is quite simple and once you realize it, you can write quicksort as fast as bubble sort.

Algorithm

The divide-and-conquer strategy is used in quicksort. Below the recursion step is described:
  1. Choose a pivot value. We take the value of the middle element as the pivot value, but it can be any value which is in the range of sorted values, even if it is not present in the array.
  2. Partition. Rearrange elements in such a way that all elements which are lesser than the pivot go to the left part of the array and all elements greater than the pivot go to the right part of the array. Values equal to the pivot can stay in any part of the array. Notice that the array may be divided into non-equal parts.
  3. Sort both parts. Apply quicksort algorithm recursively to the left and the right parts.

Partition algorithm in detail

There are two indices i and j; at the very beginning of the partition algorithm i points to the first element in the array and j points to the last one. Then the algorithm moves i forward, until an element with value greater or equal to the pivot is found. Index j is moved backward, until an element with value lesser or equal to the pivot is found. If i ≤ j then they are swapped and i steps to the next position (i + 1), j steps to the previous one (j - 1). The algorithm stops when i becomes greater than j.

After partition, all values before i-th element are less or equal than the pivot and all values after j-th element are greater or equal to the pivot.

Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.

Quicksort example

Notice, that we show here only the first recursion step, in order not to make example too long. But, in fact, {1, 2, 5, 7, 3} and {14, 7, 26, 12} are sorted then recursively.

Why does it work?

On the partition step algorithm divides the array into two parts and every element a from the left part is less or equal than every element b from the right part. Also a and b satisfy a ≤ pivot ≤ b inequality. After completion of the recursion calls both of the parts become sorted and, taking into account arguments stated above, the whole array is sorted.

Complexity analysis

On the average quicksort has O(n log n) complexity, but a strong proof of this fact is not trivial and not presented here. Still, you can find the proof in [1]. In the worst case quicksort runs in O(n^2) time, but on most "practical" data it works just fine and outperforms other O(n log n) sorting algorithms.

Code snippets

Java

int partition(int arr[], int left, int right)
{
    int i = left;
    int j = right;
    int temp;
    int pivot = arr[(left + right) >> 1];
    while (i <= j) {
        while (arr[i] < pivot) {   // scan right until an element >= pivot
            i++;
        }
        while (arr[j] > pivot) {   // scan left until an element <= pivot
            j--;
        }
        if (i <= j) {
            temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
            i++;
            j--;
        }
    }
    return i;
}

 

void quickSort(int arr[], int left, int right) {
    int index = partition(arr, left, right);
    if (left < index - 1) {
        quickSort(arr, left, index - 1);
    }
    if (index < right) {
        quickSort(arr, index, right);
    }
}

Python

def quickSort(L, left, right):
    i = left
    j = right
    pivot = L[(left + right) >> 1]
    # partition
    while i <= j:
        while L[i] < pivot:
            i += 1
        while L[j] > pivot:
            j -= 1
        if i <= j:
            L[i], L[j] = L[j], L[i]
            i += 1
            j -= 1
    # recursion
    if left < j:
        quickSort(L, left, j)
    if i < right:
        quickSort(L, i, right)



Posted by ytl, 2011-05-08 14:13
Algorithms to Insertion Sort
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/08/349773.html

Insertion Sort

Insertion sort belongs to the O(n^2) sorting algorithms. Unlike many sorting algorithms with quadratic complexity, it is actually applied in practice for sorting small arrays of data. For instance, it is used to improve the quicksort routine. Some sources note that people use the same algorithm when ordering items by hand, for example a hand of cards.

Algorithm

Insertion sort somewhat resembles selection sort. The array is imaginarily divided into two parts - a sorted one and an unsorted one. At the beginning, the sorted part contains the first element of the array and the unsorted one contains the rest. At every step, the algorithm takes the first element in the unsorted part and inserts it into the right place of the sorted one. When the unsorted part becomes empty, the algorithm stops. Schematically, an insertion sort step looks like this:

Insertion sort sketchy, before insertion

becomes

Insertion sort sketchy, after insertion

The idea of the sketch was originally posted here.

Let us see an example of insertion sort routine to make the idea of algorithm clearer.

Example. Sort {7, -5, 2, 16, 4} using insertion sort.

Insertion sort example

The ideas of insertion

The main operation of the algorithm is insertion. The task is to insert a value into the sorted part of the array. Let us see the variants of how we can do it.

"Sifting down" using swaps

The simplest way to insert next element into the sorted part is to sift it down, until it occupies correct position. Initially the element stays right after the sorted part. At each step algorithm compares the element with one before it and, if they stay in reversed order, swap them. Let us see an illustration.

insertion sort, sift down illustration

This approach writes sifted element to temporary position many times. Next implementation eliminates those unnecessary writes.

Shifting instead of swapping

We can modify previous algorithm, so it will write sifted element only to the final correct position. Let us see an illustration.

insertion sort, shifting illustration

It is the most commonly used modification of the insertion sort.

Using binary search

It is reasonable to use the binary search algorithm to find a proper place for insertion. This variant of the insertion sort is called binary insertion sort. After the position for insertion is found, the algorithm shifts the part of the array and inserts the element. This version has a lower number of comparisons, but the overall average complexity remains O(n^2). From a practical point of view this improvement is not very important, because insertion sort is used on quite small data sets.
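The post describes binary insertion sort without a snippet; a possible sketch (my own, reusing the newValue naming of the other snippets) is below. The binary search keeps elements equal to the inserted value on the left, so the sort stays stable, and the shift is done in one block:

```java
import java.util.Arrays;

public class BinaryInsertionSort {
    static void sort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int newValue = arr[i];
            // binary-search the sorted part arr[0..i-1] for the insertion point
            int lo = 0, hi = i;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (arr[mid] <= newValue) lo = mid + 1;
                else hi = mid;
            }
            // shift arr[lo..i-1] one position right, then drop the value in
            System.arraycopy(arr, lo, arr, lo + 1, i - lo);
            arr[lo] = newValue;
        }
    }

    public static void main(String[] args) {
        int[] a = {7, -5, 2, 16, 4};   // the example array from this post
        sort(a);
        System.out.println(Arrays.toString(a)); // [-5, 2, 4, 7, 16]
    }
}
```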

Complexity analysis

Insertion sort's overall complexity is O(n^2) on average, regardless of the method of insertion. On almost sorted arrays insertion sort shows better performance, up to O(n) in the case of applying insertion sort to an already sorted array. The number of writes is O(n^2) on average, but the number of comparisons may vary depending on the insertion algorithm. It is O(n^2) when shifting or swapping methods are used and O(n log n) for binary insertion sort.

From the point of view of practical application, an average complexity of the insertion sort is not so important. As it was mentioned above, insertion sort is applied to quite small data sets (from 8 to 12 elements). Therefore, first of all, a "practical performance" should be considered. In practice insertion sort outperforms most of the quadratic sorting algorithms, like selection sort or bubble sort.

Insertion sort properties

  • adaptive (performance adapts to the initial order of elements);
  • stable (insertion sort retains relative order of the same elements);
  • in-place (requires constant amount of additional space);
  • online (new elements can be added during the sort).

Code snippets

We show the idea of insertion with shifts in Java implementation and the idea of insertion using python code snippet.

Java implementation

void insertionSort(int[] arr) {
    int i, j, newValue;
    for (i = 1; i < arr.length; i++) {
        newValue = arr[i];
        j = i;
        while (j > 0 && arr[j - 1] > newValue) {
            arr[j] = arr[j - 1];
            j--;
        }
        arr[j] = newValue;
    }
}

Python implementation

def insertionSort(L):
    for i in range(1, len(L)):
        j = i
        newValue = L[i]
        while j > 0 and L[j - 1] > newValue:
            L[j] = L[j - 1]
            j = j - 1
        L[j] = newValue



Posted by ytl, 2011-05-08 12:24
Binary search algorithm
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/06/349702.html

Binary search algorithm

Generally, to find a value in an unsorted array, we should look through the elements of the array one by one, until the searched value is found. In case the searched value is absent from the array, we go through all elements. On average, the complexity of such an algorithm is proportional to the length of the array.

The situation changes significantly when the array is sorted. If we know it, random access capability can be utilized very efficiently to find the searched value quickly. The cost of the searching algorithm reduces to the binary logarithm of the array length. For reference, log2(1 000 000) ≈ 20. It means that, in the worst case, the algorithm makes 20 steps to find a value in a sorted array of a million elements, or to conclude that it is not present in the array.
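The "about 20 steps for a million elements" figure can be checked by simply counting halvings; a quick sketch (the method name is mine):

```java
public class HalvingDemo {
    // counts how many times n can be halved before the search range is empty:
    // the worst-case number of probes of binary search over n elements
    static int steps(int n) {
        int count = 0;
        while (n > 0) {
            n = n / 2;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(1_000_000)); // 20
    }
}
```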

Algorithm

Algorithm is quite simple. It can be done either recursively or iteratively:

  1. get the middle element;
  2. if the middle element equals to the searched value, the algorithm stops;
  3. otherwise, two cases are possible:
    • searched value is less, than the middle element. In this case, go to the step 1 for the part of the array, before middle element.
    • searched value is greater, than the middle element. In this case, go to the step 1 for the part of the array, after middle element.
Now we should define when the iterations should stop. The first case is when the searched element is found. The second one is when the subarray has no elements. In this case, we can conclude that the searched value is not present in the array.

Examples

Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

Step 1 (middle element is 19 > 6):     -1  5  6  18  19  25  46  78  102  114

Step 2 (middle element is 5 < 6):      -1  5  6  18  19  25  46  78  102  114

Step 3 (middle element is 6 == 6):     -1  5  6  18  19  25  46  78  102  114

Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

Step 1 (middle element is 19 < 103):   -1  5  6  18  19  25  46  78  102  114

Step 2 (middle element is 78 < 103):   -1  5  6  18  19  25  46  78  102  114

Step 3 (middle element is 102 < 103):  -1  5  6  18  19  25  46  78  102  114

Step 4 (middle element is 114 > 103):  -1  5  6  18  19  25  46  78  102  114

Step 5 (searched value is absent):     -1  5  6  18  19  25  46  78  102  114

Complexity analysis

A huge advantage of this algorithm is that its complexity depends on the array size logarithmically in the worst case. In practice it means that the algorithm will do at most log2(n) iterations, which is a very small number even for big arrays. It can be proved very easily. Indeed, on every step the size of the searched part is reduced by half. The algorithm stops when there are no elements to search in. Therefore, solving the following inequality in whole numbers:

n / 2^iterations > 0

resulting in

iterations <= log2(n).

It means, that binary search algorithm time complexity is O(log2(n)).

Code snippets.

You can see recursive solution for Java and iterative for python below.

Java

int binarySearch(int[] array, int value, int left, int right) {
    if (left > right)
        return -1;
    int middle = left + (right - left) / 2;
    if (array[middle] == value)
        return middle;
    if (array[middle] > value)
        return binarySearch(array, value, left, middle - 1);
    else
        return binarySearch(array, value, middle + 1, right);
}

Python

def biSearch(L, e, first, last):
    if last - first < 2:
        return L[first] == e or L[last] == e
    mid = first + (last - first) // 2
    if L[mid] == e:
        return True
    if L[mid] > e:
        return biSearch(L, e, first, mid - 1)
    return biSearch(L, e, mid + 1, last)

      



Posted by ytl, 2011-05-06 18:11
Algorithm to merge sort
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/06/349695.html

Merge sort is an O(n log n) comparison-based sorting algorithm. Most implementations produce a stable sort, meaning that the implementation preserves the input order of equal elements in the sorted output. It is a divide and conquer algorithm. Merge sort was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up mergesort appeared in a report by Goldstine and von Neumann as early as 1948.
A divide and conquer algorithm: 1, split the problem into several subproblems of the same type; 2, solve each independently; 3, combine those solutions.



Python implementation (the merge helper is the one defined in the "Algorithm to merge sorted arrays" post below):

  def mergeSort(L):
      if len(L) < 2:
          return L
      middle = len(L) // 2
      left = mergeSort(L[:middle])
      right = mergeSort(L[middle:])
      together = merge(left, right)
      return together


Posted by ytl, 2011-05-06 17:05
Algorithm to merge sorted arrays
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/06/349692.html

Algorithm to merge sorted arrays

In the article we present an algorithm for merging two sorted arrays. One can learn how to operate with several arrays and master read/write indices. Also, the algorithm has certain applications in practice, for instance in merge sort.

Merge algorithm

Assume, that both arrays are sorted in ascending order and we want resulting array to maintain the same order. Algorithm to merge two arrays A[0..m-1] and B[0..n-1] into an array C[0..m+n-1] is as following:

  1. Introduce read-indices i, j to traverse arrays A and B, accordingly. Introduce write-index k to store the position of the first free cell in the resulting array. By default i = j = k = 0.
  2. At each step: if both indices are in range (i < m and j < n), choose the minimum of (A[i], B[j]) and write it to C[k]. Otherwise go to step 4.
  3. Increase k, and, by one, the index of the array in which the algorithm located the minimal value. Repeat step 2.
  4. Copy the remaining values from the array whose index is still in range to the resulting array.

Enhancements

The algorithm could be enhanced in many ways. For instance, it is reasonable to check if A[m - 1] < B[0] or B[n - 1] < A[0]. In either of those cases, there is no need to do more comparisons: the algorithm can just copy the source arrays into the resulting one in the right order. More complicated enhancements may include searching for interleaving parts and running the merge algorithm only for them. This can save much time, when the sizes of the merged arrays differ by scores of times.
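The first enhancement above (skip all comparisons when the two value ranges do not interleave) can be bolted onto the merge as a fast path. A sketch of one way to do it (the class and method names are mine, not from the article):

```java
import java.util.Arrays;

public class MergeFastPath {
    // merges sorted A and B into C; copies wholesale when the ranges don't interleave
    static void merge(int[] A, int[] B, int[] C) {
        int m = A.length, n = B.length;
        if (n == 0 || (m > 0 && A[m - 1] <= B[0])) {
            // all of A precedes B (or B is empty): two block copies, no comparisons
            System.arraycopy(A, 0, C, 0, m);
            System.arraycopy(B, 0, C, m, n);
            return;
        }
        if (m == 0 || B[n - 1] <= A[0]) {
            // all of B precedes A (or A is empty)
            System.arraycopy(B, 0, C, 0, n);
            System.arraycopy(A, 0, C, n, m);
            return;
        }
        // interleaving ranges: fall back to the ordinary merge
        int i = 0, j = 0, k = 0;
        while (i < m && j < n) C[k++] = (A[i] <= B[j]) ? A[i++] : B[j++];
        while (i < m) C[k++] = A[i++];
        while (j < n) C[k++] = B[j++];
    }

    public static void main(String[] args) {
        int[] C = new int[6];
        merge(new int[]{1, 4, 9}, new int[]{2, 3, 10}, C);
        System.out.println(Arrays.toString(C)); // [1, 2, 3, 4, 9, 10]
    }
}
```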

Complexity analysis

Merge algorithm's time complexity is O(n + m). Additionally, it requires O(n + m) additional space to store resulting array.

Code snippets

Java implementation

// size of C array must be equal or greater than
// sum of A and B arrays' sizes
public void merge(int[] A, int[] B, int[] C) {
    int i = 0, j = 0, k = 0;
    int m = A.length;
    int n = B.length;
    while (i < m && j < n) {
        if (A[i] <= B[j]) {
            C[k] = A[i];
            i++;
        } else {
            C[k] = B[j];
            j++;
        }
        k++;
    }
    while (i < m) {
        C[k] = A[i];
        i++;
        k++;
    }
    while (j < n) {
        C[k] = B[j];
        j++;
        k++;
    }
}


Python implementation

def merge(left, right):
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i = i + 1
        else:
            result.append(right[j])
            j = j + 1
    while i < len(left):
        result.append(left[i])
        i = i + 1
    while j < len(right):
        result.append(right[j])
        j = j + 1
    return result

  
Merge sort with a comparator:

import operator

def mergeSort(L, compare = operator.lt):
    if len(L) < 2:
        return L[:]
    else:
        middle = int(len(L) / 2)
        left = mergeSort(L[:middle], compare)
        right = mergeSort(L[middle:], compare)
        return merge(left, right, compare)

def merge(left, right, compare):
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if compare(left[i], right[j]):
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    while i < len(left):
        result.append(left[i])
        i += 1
    while j < len(right):
        result.append(right[j])
        j += 1
    return result
               



Posted by ytl, 2011-05-06 16:55
Sorting algorithms -- Selection Sort
http://www.aygfsteel.com/ytl-zlq/archive/2011/05/06/349687.html

Selection Sort

Selection sort is one of the O(n^2) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Selection sort is notable for its programming simplicity and it can outperform other sorts in certain situations (see complexity analysis for more details).

Algorithm

The idea of the algorithm is quite simple. The array is imaginarily divided into two parts - a sorted one and an unsorted one. At the beginning, the sorted part is empty, while the unsorted one contains the whole array. At every step, the algorithm finds the minimal element in the unsorted part and adds it to the end of the sorted one. When the unsorted part becomes empty, the algorithm stops.

When the algorithm sorts an array, it swaps the first element of the unsorted part with the minimal element and then includes it in the sorted part. This implementation of selection sort is not stable. In case a linked list is sorted, and, instead of swaps, the minimal element is linked to the end of the sorted part, selection sort is stable.

Let us see an example of sorting an array to make the idea of selection sort clearer.

Example. Sort {5, 1, 12, -5, 16, 2, 12, 14} using selection sort.

Selection sort example

Complexity analysis

Selection sort stops when the unsorted part becomes empty. As we know, on every step the number of unsorted elements decreases by one. Therefore, selection sort makes n steps (n is the number of elements in the array) of the outer loop before stopping. Every step of the outer loop requires finding the minimum in the unsorted part. Summing up, n + (n - 1) + (n - 2) + ... + 1 results in O(n^2) comparisons. The number of swaps may vary from zero (in case of a sorted array) to n - 1 (in case the array was sorted in reversed order), which results in O(n) swaps. Overall algorithm complexity is O(n^2).

The fact that selection sort requires at most n - 1 swaps makes it very efficient in situations where the write operation is significantly more expensive than the read operation.

Code snippets

Java

public void selectionSort(int[] arr) {
    int i, j, minIndex, tmp;
    int n = arr.length;
    for (i = 0; i < n - 1; i++) {
        minIndex = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[minIndex])
                minIndex = j;
        if (minIndex != i) {
            tmp = arr[i];
            arr[i] = arr[minIndex];
            arr[minIndex] = tmp;
        }
    }
}

Python

def selectionSort(L):
    for i in range(len(L) - 1):
        minIndex = i
        minValue = L[i]
        j = i + 1
        while j < len(L):
            if minValue > L[j]:
                minIndex = j
                minValue = L[j]
            j += 1
        if minIndex != i:
            temp = L[i]
            L[i] = L[minIndex]
            L[minIndex] = temp




Posted by ytl, 2011-05-06 16:16