Sorting Algorithms

What is an Algorithm?

An algorithm is a set of commands that must be followed for a computer to perform calculations or other problem-solving operations.

Definition

An algorithm is a finite set of instructions carried out in a specific order to perform a particular task.

 

Characteristics of an Algorithm

 

1) Well-defined Inputs

Well-defined inputs ensure that the algorithm's behaviour is deterministic, which means that the same input will always produce the same output.

   Unambiguous inputs help prevent incorrect implementations and misunderstanding of the algorithm's requirements.

2) Well-defined Outputs

The outputs of an algorithm should be well-defined to ensure that the algorithm produces the intended and accurate result for a given set of inputs.

 It avoids ambiguity and guarantees that the algorithm solves the problem correctly.

3) Unambiguity

 Ambiguity in the algorithm's description can lead to incorrect implementations and unreliable results. That is why it is important for an algorithm to be unambiguous.

It is also easier for an unambiguous algorithm to be standardized and used widely across different applications.

4) Finiteness

The algorithm should end after a finite amount of time, and it should have a limited number of instructions.

  A finite algorithm ensures that it will eventually stop executing and produce a result.

An infinite algorithm would never reach a conclusion, which is impractical in real-world scenarios where computation cannot be performed infinitely.

5) Language Independent

A language-independent algorithm can be easily ported to different programming languages and platforms, making it more adaptable and usable across various environments.

Being language-independent also makes an algorithm future-proof, which means it can be implemented easily using newer programming languages.

 

Time and Space Complexity

Time complexity describes how the number of operations an algorithm performs grows with the length of the input.

Time complexity is not a measurement of the actual wall-clock time it takes to execute a particular algorithm.

What Is Space Complexity?

When an algorithm is run on a computer, it requires a certain amount of memory. The amount of memory used by a program during its execution is its space complexity.

What Are Asymptotic Notations?

Asymptotic notations are mathematical notations that allow you to analyze an algorithm's running time by describing its behaviour as the input size grows.

This is also referred to as an algorithm's growth rate. It allows two algorithms to be compared based on how their performance changes as the input size is increased or decreased.

 Asymptotic notations are classified into three types:

1.  Big-Oh (O) notation

2.  Big-Omega (Ω) notation

3.  Big-Theta (Θ) notation

Best Case, Worst Case, and Average Case in Asymptotic Analysis

1. Big-Oh (O) Notation

It specifies the upper bound of a function, i.e., the maximum time required by an algorithm; this is referred to as the worst-case time complexity.

Worst Case: It is defined as the condition under which an algorithm takes the longest possible time to complete statement execution. In this case, the execution time serves as an upper bound on the algorithm's time complexity.

2. Big-Omega (Ω) notation

Big-Omega is an asymptotic notation for the best case, or a floor on the growth rate of a given function. It gives an asymptotic lower bound on the growth rate of an algorithm's runtime; this is known as the best case.

Best Case: It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.

3. Big-Theta (Θ) notation

Big-Theta bounds a function from both above and below, i.e., it gives both an upper and a lower bound on the running time for a given input size.

Average Case: Add up the running times over all possible inputs and take the average. Here, the execution time serves as both a lower and an upper bound on the algorithm's time complexity.
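For reference, the standard textbook definitions behind these three notations (stated here as a brief supplement) are:

f(n) = O(g(n))  if there exist constants c > 0 and n0 such that 0 <= f(n) <= c·g(n) for all n >= n0
f(n) = Ω(g(n))  if there exist constants c > 0 and n0 such that 0 <= c·g(n) <= f(n) for all n >= n0
f(n) = Θ(g(n))  if f(n) = O(g(n)) and f(n) = Ω(g(n))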

Recursive Algorithm

A recursive sorting algorithm calls itself to sort a smaller part of the array and then combines the partially sorted results. A recursive algorithm is typically slower because, before each function call, the current state of the function is stored on the stack; after the return statement, the previous state is restored from the stack.

Non-Recursive Algorithm

A non-recursive algorithm does the sorting all at once, without calling itself; bubble sort is an example of a non-recursive sorting algorithm. Non-recursive algorithms generally execute faster and use less memory because they do not use the call stack, whereas a recursive algorithm uses more memory since the stack stores the state of each active call. The time complexity of a recursive algorithm can be found by expressing the cost of the n-th recursive call in terms of the previous calls, while the time complexity of a non-recursive algorithm can be found by counting how many times the loop body is repeated.
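As a concrete illustration of the call-stack point above, here is a minimal Python sketch (not from the original text; the function name is my own) of the insertion-sort idea written recursively, where each call sorts a smaller prefix of the array and its state stays on the stack until that prefix is sorted:

def recursive_insertion_sort(a, n=None):
    # Recursively sorts the first n elements of list a in place.
    if n is None:
        n = len(a)
    if n <= 1:                           # base case: a prefix of 0 or 1 elements is already sorted
        return a
    recursive_insertion_sort(a, n - 1)   # sort the smaller prefix first (state kept on the stack)
    key = a[n - 1]                       # insert the n-th element into the sorted prefix
    i = n - 2
    while i >= 0 and a[i] > key:
        a[i + 1] = a[i]
        i -= 1
    a[i + 1] = key
    return a

print(recursive_insertion_sort([5, 2, 4, 1]))   # [1, 2, 4, 5]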

 Sorting Algorithm

A sorting algorithm is used to arrange elements of an array/list in a specific order.

Sorting is the process of arranging the elements of an array so that they can be placed either in ascending or descending order.

Insertion sort, merge sort, and bubble sort are stable. Heap sort and quick sort are unstable.

 Insertion Sort Algorithm

Insertion sort is a sorting algorithm that places an unsorted element at its suitable place in each iteration.

Insertion sort works similarly as we sort cards in our hand in a card game.

We assume that the first card is already sorted; then, we select an unsorted card. If the unsorted card is greater than the card in hand, it is placed to the right; otherwise, it is placed to the left. In the same way, the remaining unsorted cards are taken and put in their right place.

Algorithm: Insertion-Sort(A)

1.  for j = 2 to A.length
2.      key = A[j]
3.      i = j – 1
4.      while i > 0 and A[i] > key
5.          A[i + 1] = A[i]
6.          i = i – 1
7.      A[i + 1] = key
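The pseudocode above translates almost directly into Python; the following is an illustrative sketch using 0-based indexing:

def insertion_sort(a):
    # Sorts list a in place in ascending order.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements larger than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]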

Insertion Sort Complexity

Best: O(n)
Worst: O(n²)
Average: O(n²)
Space Complexity: O(1)
Stability: Yes
Worst Case Complexity: O(n²)

Suppose an array is in ascending order and we want to sort it in descending order; this case gives the worst-case complexity.

Best Case Complexity: O(n)

When the array is already sorted, no swaps are needed and only comparisons are made. So, there are only n comparisons, and the complexity is linear.

Average Case Complexity: O(n²)

It occurs when the elements of the array are in jumbled order (neither ascending nor descending).

 Heap Sort Algorithm

Heap Sort is a popular and efficient sorting algorithm. Learning how to write the heap sort algorithm requires knowledge of two types of data structures: arrays and trees.

The initial set of numbers that we want to sort is stored in an array, e.g. [10, 3, 76, 34, 23, 32], and after sorting we get a sorted array [3, 10, 23, 32, 34, 76].

 What is Heap?

Heap is a special tree-based data structure. A binary tree is said to follow a heap data structure if it is a complete binary tree.

All nodes in the tree follow the property that they are greater than their children, i.e., the largest element is at the root and both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than their children, it is called a min-heap.

Max-heap

To build a max-heap from any tree, we can start heapifying each sub-tree from the bottom up and end up with a max-heap after the function has been applied to all the elements, including the root element.


 
Algorithm: Heapsort(A)

BUILD-MAX-HEAP(A)
for i = A.length downto 2
    exchange A[1] with A[i]
    A.heap-size = A.heap-size – 1
    MAX-HEAPIFY(A, 1)
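A Python sketch of the same procedure (illustrative; max_heapify plays the role of MAX-HEAPIFY and the first loop of heap_sort plays the role of BUILD-MAX-HEAP, using 0-based indices):

def max_heapify(a, heap_size, i):
    # Sift a[i] down until the subtree rooted at i satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, heap_size, largest)

def heap_sort(a):
    n = len(a)
    # Build a max-heap bottom-up, starting from the last non-leaf node.
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, n, i)
    # Repeatedly move the current maximum to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        max_heapify(a, end, 0)
    return a

print(heap_sort([10, 3, 76, 34, 23, 32]))   # [3, 10, 23, 32, 34, 76]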

Heap Sort Complexity

Time Complexity:
Best: O(n log n)
Worst: O(n log n)
Average: O(n log n)

Space Complexity: O(1)
Stability: No

Bubble Sort

Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them until they are in the intended order.

Working of Bubble Sort

Suppose we are trying to sort the elements in ascending order.

 First Iteration (Compare and Swap)

 1.  Starting from the first index, compare the first and the second elements.

2.  If the first element is greater than the second element, they are swapped.

3.  Now, compare the second and the third elements. Swap them if they are not in order.

4.  The above process goes on until the last element. Each pass places the largest remaining element at the end, and the passes are repeated until the whole list is sorted (see the sketch below).
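These passes can be written as a short Python sketch (illustrative; the swapped flag is an optional early-exit optimization that gives the O(n) best case on an already-sorted list):

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest remaining element has bubbled to the end.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:      # no swaps in a full pass: the list is already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]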

Bubble Sort Complexity

Time Complexity:
Best: O(n)
Worst: O(n²)
Average: O(n²)

Space Complexity: O(1)
Stability: Yes

 Sorting in Linear Time

Linear-time sorting algorithms are a subset of sorting algorithms with a significant advantage: they can sort a given set of elements in linear time, i.e., the runtime increases linearly with the input size.

To determine the range, identify the minimum and maximum values of the input array.

 

Counting-Based Algorithms

1.  Counting-Based Sort: 

Counting sort is a non-comparative sorting algorithm. It counts the occurrences of each distinct element in the input array and uses this information to determine the correct position of each element in the sorted output array. Counting sort assumes that the input elements are integers or can be mapped to integers.

Counting Sort

The counting sort algorithm works by first creating a list of the counts or occurrences of each unique value in the list. It then creates a final sorted list based on the list of counts.

One important thing to remember is that counting sort can only be used when we know the range of possible values in the input beforehand.

COUNTING-SORT(A, B, k)

let C[0 … k] be a new array

for i = 0 to k
    C[i] = 0

for j = 1 to A.length
    C[A[j]] = C[A[j]] + 1

// C[i] now contains the number of elements equal to i.

for i = 1 to k
    C[i] = C[i] + C[i – 1]

// C[i] now contains the number of elements less than or equal to i.

for j = A.length downto 1
    B[C[A[j]]] = A[j]
    C[A[j]] = C[A[j]] – 1
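A Python rendering of the same procedure (illustrative sketch; it assumes the keys are non-negative integers no larger than k):

def counting_sort(a, k):
    counts = [0] * (k + 1)
    for x in a:
        counts[x] += 1                 # counts[i] = number of elements equal to i
    for i in range(1, k + 1):
        counts[i] += counts[i - 1]     # counts[i] = number of elements <= i
    output = [0] * len(a)
    # Walk backwards so that equal keys keep their relative order (stability).
    for x in reversed(a):
        counts[x] -= 1
        output[counts[x]] = x
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1], 8))   # [1, 2, 2, 3, 3, 4, 8]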

              

Bucket Sort

Bucket sort is a sorting algorithm that operates on elements by dividing them into different buckets and then sorting these buckets individually. Each bucket is sorted individually using a separate sorting algorithm, such as insertion sort, or by applying the bucket sort algorithm recursively.

Bucket sort is mainly useful when the input is uniformly distributed over a range. For example, imagine you have a large array of floating-point numbers distributed uniformly between an upper and a lower bound.


BUCKET-SORT(A)

n = A.length
let B[0 … n – 1] be a new array
for i = 0 to n – 1
    make B[i] an empty list
for i = 1 to n
    insert A[i] into list B[⌊n · A[i]⌋]
for i = 0 to n – 1
    sort list B[i] with insertion sort
concatenate the lists B[0], B[1], …, B[n – 1] together in order
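A Python sketch of the same idea (illustrative; it assumes the inputs are floats uniformly distributed in [0, 1), as the pseudocode does, and uses Python's built-in sort in place of insertion sort for each bucket):

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]
    # Distribute: a value x in [0, 1) goes into bucket floor(n * x).
    for x in a:
        buckets[int(n * x)].append(x)
    # Sort each bucket individually and concatenate the buckets in order.
    result = []
    for b in buckets:
        result.extend(sorted(b))
    return result

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12]))
# [0.12, 0.17, 0.21, 0.26, 0.39, 0.72, 0.78, 0.94]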


Bucket Sort Complexity

Time Complexity:
Best: O(n + k)
Worst: O(n²)
Average: O(n)

Space Complexity: O(n + k)
Stability: Yes

Radix Sort

Radix Sort is a non-comparison-based sorting algorithm that sorts elements digit by digit (or character by character). It processes the digits of the elements, typically from the least significant to the most significant. Radix sort assumes that the input elements are integers or strings.

1.  MSD (Most Significant Digit) Radix Sort: MSD Radix Sort is a variant of radix sort that starts sorting elements based on their most significant digit. It recursively divides the elements into subgroups based on the value of the current digit and applies MSD Radix Sort to each subgroup until all the digits have been processed.

2.  LSD (Least Significant Digit) Radix Sort: LSD Radix Sort is another variant that starts sorting elements based on their least significant digit. It repeatedly sorts the elements on each digit from rightmost to leftmost, producing a sorted result.




Algorithm: RadixSort(a[], n)

// Find the maximum element of the list
max = a[0]
for i = 1 to n – 1
    if a[i] > max
        max = a[i]
// Apply counting sort to each digit of the input numbers, least significant digit first
pos = 1
while max / pos > 0
    countSort(a, n, pos)
    pos = pos * 10
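A Python sketch of LSD radix sort for non-negative integers (illustrative; count_sort_by_digit is my own helper name standing in for the countSort call above, and it sorts stably by one decimal digit):

def count_sort_by_digit(a, pos):
    # Stable counting sort of a by the decimal digit selected by pos (1, 10, 100, ...).
    counts = [0] * 10
    for x in a:
        counts[(x // pos) % 10] += 1
    for i in range(1, 10):
        counts[i] += counts[i - 1]
    output = [0] * len(a)
    for x in reversed(a):              # backwards pass keeps the sort stable
        d = (x // pos) % 10
        counts[d] -= 1
        output[counts[d]] = x
    return output

def radix_sort(a):
    if not a:
        return a
    largest = max(a)
    pos = 1
    # Sort by each digit, least significant first, until every digit of the maximum is used.
    while largest // pos > 0:
        a = count_sort_by_digit(a, pos)
        pos *= 10
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))   # [2, 24, 45, 66, 75, 90, 170, 802]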

 

Searching Algorithms

 Searching algorithms are methods or procedures used to find a specific item or element within a collection of data.

1.  Linear Search: In this simple algorithm, each element in the collection is sequentially checked until the desired item is found or the entire list is traversed. It is suitable for small or unsorted lists, but its time complexity is O(n) in the worst case (a short sketch follows this list).




2.  Binary Search: This algorithm is applicable only to sorted lists. It repeatedly compares the middle element of the list with the target element and narrows down the search range by half based on the comparison result. Binary search has a time complexity of O(log n), making it highly efficient for large sorted lists.
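For point 1 above, a linear search can be written as a minimal Python sketch (the function name is illustrative):

def linear_search(a, val):
    # Check each element in turn; O(n) comparisons in the worst case.
    for i, x in enumerate(a):
        if x == val:
            return i        # index of the first match
    return -1               # element not found

print(linear_search([7, 2, 9, 4], 9))   # 2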




Algorithm: BinarySearch(a, lowerbound, upperbound, val)
// 'a' is the given array, 'lowerbound' is the index of the first array element,
// 'upperbound' is the index of the last array element, and 'val' is the value to be searched.

Step 1: set begin = lowerbound, end = upperbound, position = –1
Step 2: repeat steps 3 and 4 while begin <= end
Step 3:     set mid = (begin + end) / 2
Step 4:     if a[mid] = val
                set position = mid
                print position
                go to step 6
            else if a[mid] > val
                set end = mid – 1
            else
                set begin = mid + 1
            [end of if]
        [end of loop]
Step 5: if position = –1
            print "element not found"
        [end of if]
Step 6: exit
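The same steps can be expressed as a Python function (illustrative sketch using 0-based indices and integer division):

def binary_search(a, val):
    # a must already be sorted in ascending order; returns the index of val or -1.
    begin, end = 0, len(a) - 1
    while begin <= end:
        mid = (begin + end) // 2
        if a[mid] == val:
            return mid
        elif a[mid] > val:
            end = mid - 1       # the target can only lie in the left half
        else:
            begin = mid + 1     # the target can only lie in the right half
    return -1                   # element not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1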
 
 
