A Blog about Programming, Algorithms, Coding Competitions, and Discrete Mathematics!
Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. Each time I prepared for an interview, I thought to myself, "Why oh why hasn't someone created a nice Big-O cheat sheet?" So, to save all of you fine folks a ton of time, I went ahead and created one. Enjoy!

Searching

Algorithm                                                      Data Structure                          Time (Average)           Time (Worst)             Space (Worst)
Depth First Search (DFS)                                       Graph of |V| vertices and |E| edges     -                        O(|E| + |V|)             O(|V|)
Breadth First Search (BFS)                                     Graph of |V| vertices and |E| edges     -                        O(|E| + |V|)             O(|V|)
Binary search                                                  Sorted array of n elements              O(log(n))                O(log(n))                O(1)
Linear search (Brute Force)                                    Array                                   O(n)                     O(n)                     O(1)
Shortest path by Dijkstra, using a min-heap as priority queue  Graph with |V| vertices and |E| edges   O((|V| + |E|) log |V|)   O((|V| + |E|) log |V|)   O(|V|)
Shortest path by Dijkstra, using an unsorted array as PQ       Graph with |V| vertices and |E| edges   O(|V|^2)                 O(|V|^2)                 O(|V|)
Shortest path by Bellman-Ford                                  Graph with |V| vertices and |E| edges   O(|V||E|)                O(|V||E|)                O(|V|)
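
As a quick illustration of the Binary search row, here is a minimal iterative sketch in C++ (function and variable names are mine): each step halves the search range, giving O(log(n)) time, and only a few index variables are kept, giving O(1) extra space.

 #include <vector>

 // Iterative binary search: O(log n) time, O(1) extra space.
 // Returns the index of `target` in the sorted vector `a`, or -1 if it is absent.
 int binarySearch(const std::vector<int>& a, int target) {
     int lo = 0, hi = (int)a.size() - 1;
     while (lo <= hi) {
         int mid = lo + (hi - lo) / 2;      // written this way to avoid overflow of (lo + hi)
         if (a[mid] == target) return mid;
         if (a[mid] < target) lo = mid + 1; // target is in the right half
         else                 hi = mid - 1; // target is in the left half
     }
     return -1;
 }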




Sorting

Algorithm        Data Structure   Time (Best)    Time (Average)   Time (Worst)   Worst-Case Auxiliary Space
Quicksort        Array            O(n log(n))    O(n log(n))      O(n^2)         O(n)
Mergesort        Array            O(n log(n))    O(n log(n))      O(n log(n))    O(n)
Heapsort         Array            O(n log(n))    O(n log(n))      O(n log(n))    O(1)
Bubble Sort      Array            O(n)           O(n^2)           O(n^2)         O(1)
Insertion Sort   Array            O(n)           O(n^2)           O(n^2)         O(1)
Selection Sort   Array            O(n^2)         O(n^2)           O(n^2)         O(1)
Bucket Sort      Array            O(n+k)         O(n+k)           O(n^2)         O(nk)
Radix Sort       Array            O(nk)          O(nk)            O(nk)          O(n+k)
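
To see where a row like Insertion Sort comes from: its O(n) best case (already-sorted input) and O(n^2) worst case are both determined by the inner shifting loop. A minimal in-place sketch in C++ (names are mine):

 #include <vector>

 // In-place insertion sort: O(n) best case (sorted input), O(n^2) worst case, O(1) extra space.
 void insertionSort(std::vector<int>& a) {
     for (size_t i = 1; i < a.size(); ++i) {
         int key = a[i];
         size_t j = i;
         while (j > 0 && a[j - 1] > key) { // shift larger elements one slot to the right
             a[j] = a[j - 1];
             --j;
         }
         a[j] = key;                       // drop the key into its sorted position
     }
 }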


Data Structures

                       Time Complexity (Average)                       Time Complexity (Worst)                         Space (Worst)
Data Structure         Indexing    Search      Insertion   Deletion    Indexing    Search      Insertion   Deletion
Basic Array            O(1)        O(n)        -           -           O(1)        O(n)        -           -           O(n)
Dynamic Array          O(1)        O(n)        O(n)        O(n)        O(1)        O(n)        O(n)        O(n)        O(n)
Singly-Linked List     O(n)        O(n)        O(1)        O(1)        O(n)        O(n)        O(1)        O(1)        O(n)
Doubly-Linked List     O(n)        O(n)        O(1)        O(1)        O(n)        O(n)        O(1)        O(1)        O(n)
Skip List              O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(n)        O(n)        O(n)        O(n)        O(n log(n))
Hash Table             -           O(1)        O(1)        O(1)        -           O(n)        O(n)        O(n)        O(n)
Binary Search Tree     O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(n)        O(n)        O(n)        O(n)        O(n)
Cartesian Tree         -           O(log(n))   O(log(n))   O(log(n))   -           O(n)        O(n)        O(n)        O(n)
B-Tree                 O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(n)
Red-Black Tree         O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(n)
Splay Tree             -           O(log(n))   O(log(n))   O(log(n))   -           O(log(n))   O(log(n))   O(log(n))   O(n)
AVL Tree               O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(log(n))   O(n)


Heaps

Heap Type                Heapify   Find Max    Extract Max   Increase Key   Insert      Delete       Merge
Linked List (sorted)     -         O(1)        O(1)          O(n)           O(n)        O(1)         O(m+n)
Linked List (unsorted)   -         O(n)        O(n)          O(1)           O(1)        O(1)         O(1)
Binary Heap              O(n)      O(1)        O(log(n))     O(log(n))      O(log(n))   O(log(n))    O(m+n)
Binomial Heap            -         O(log(n))   O(log(n))     O(log(n))      O(log(n))   O(log(n))    O(log(n))
Fibonacci Heap           -         O(1)        O(log(n))*    O(1)*          O(1)        O(log(n))*   O(1)

(*) amortized bound
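
For the Binary Heap row, C++'s std::priority_queue is a handy way to see the bounds in practice: each push and pop costs O(log n), and top is O(1). A small usage sketch:

 #include <cstdio>
 #include <queue>

 int main() {
     std::priority_queue<int> pq;                   // max-heap by default
     for (int x : {3, 1, 4, 1, 5, 9}) pq.push(x);   // each push is O(log n)
     std::printf("%d\n", pq.top());                 // find max in O(1): prints 9
     pq.pop();                                      // extract max in O(log n)
     std::printf("%d\n", pq.top());                 // prints 5
     return 0;
 }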


Graphs

Node / Edge Management   Storage        Add Vertex     Add Edge       Remove Vertex   Remove Edge    Query
Adjacency list           O(|V|+|E|)     O(1)           O(1)           O(|V| + |E|)    O(|E|)         O(|V|)
Incidence list           O(|V|+|E|)     O(1)           O(1)           O(|E|)          O(|E|)         O(|E|)
Adjacency matrix         O(|V|^2)       O(|V|^2)       O(1)           O(|V|^2)        O(1)           O(1)
Incidence matrix         O(|V| ⋅ |E|)   O(|V| ⋅ |E|)   O(|V| ⋅ |E|)   O(|V| ⋅ |E|)    O(|V| ⋅ |E|)   O(|E|)


Notation for asymptotic growth

Letter            Bound                        Growth
Θ (theta)         upper and lower, tight       equal
O (big-oh)        upper, tightness unknown     less than or equal
o (small-oh)      upper, not tight             less than
Ω (big omega)     lower, tightness unknown     greater than or equal
ω (small omega)   lower, not tight             greater than

[1]  Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, which is why it's referred to as a tight bound (it must be both the upper and the lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time, but its upper bound is not specified. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n time (Omega(n log n)) and no more than n log n time (Big O(n log n)).

[2]  f(n) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (the input size) gets larger. In other words, the growth rate of f(n) is asymptotically proportional to g(n).

[3]  f(n) = O(g(n)) is the same kind of statement, except here the growth rate of f(n) is no faster than g(n). Big-oh is the most commonly used notation because it bounds the worst-case behavior.
In short, if an algorithm is __ then its performance is __

algorithm performance
o(n) < n
O(n) ≤ n
Θ(n) = n
Ω(n) ≥ n
ω(n) > n
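
For reference, the informal table above matches the standard textbook definitions, where c is a positive constant and n_0 a threshold beyond which the inequality holds:

f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))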

Will edit this post soon to make it more easily readable...

Blog Post Credits: Pawan Tiwari (Microsoft SDET, US)
Hello friends, today I'm going to talk about the factorial of negative numbers.

In mathematics, the factorial of a positive integer n is given as

n! = n * (n-1) * (n-2) * ... * 2 * 1

where '*' is the multiplication sign.

As you know, 0! is taken to be 1.
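
A minimal C++ sketch of this definition (the function name is mine); note that a 64-bit unsigned integer only holds factorials up to 20!, which is why very large values like 100! need special handling:

 #include <cstdio>

 // Iterative factorial for small non-negative n.
 // A 64-bit unsigned integer overflows past 20!, so big-number techniques
 // are needed for inputs like 100! or 1000!.
 unsigned long long factorial(unsigned int n) {
     unsigned long long result = 1;        // 0! = 1 falls out naturally
     for (unsigned int i = 2; i <= n; ++i)
         result *= i;
     return result;
 }

 int main() {
     std::printf("5! = %llu\n", factorial(5));   // prints 120
     return 0;
 }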

Factorials of positive integers have great applications in programming and mathematics. Permutations and combinations are one (where we find the number of ways a group of objects can be arranged or selected from a given group of objects). Expansions of terms like (a+b)^n (the binomial theorem) are also computed using factorials.

We'll discuss how to find factorials of very large numbers, like 100! or 1000!, in my upcoming posts.

Well, let's discuss factorials of negative numbers here...
 
Hello friends, I'm going to talk about Graph Theory in my upcoming posts, so here is some basic graph terminology that will form a base for understanding Graph Theory and its algorithms.

 

Overview

  • Definitions: Vertices, edges, paths, etc
  • Representations: Adjacency list and adjacency matrix



Definitions: Graph, Vertices, Edges

  • Define a graph G = (V, E) by defining a pair of sets:
    1. V = a set of vertices
    2. E = a set of edges

  • Edges:
    • Each edge is defined by a pair of vertices
    • An edge connects the vertices that define it
    • In some cases, the two vertices can be the same (a self-loop)

  • Vertices:
    • Vertices also called nodes
    • Denote vertices with labels

  • Representation:
    • Represent vertices with circles, perhaps containing a label
    • Represent edges with lines between circles

  • Example:
    • V = {A,B,C,D}
    • E = {(A,B),(A,C),(A,D),(B,D),(C,D)}
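
To connect this with the representations mentioned in the overview, here is one way the example graph above could be stored as an adjacency list in C++ (a minimal sketch; the container choice and names are mine):

 #include <cstdio>
 #include <map>
 #include <vector>

 int main() {
     // V = {A, B, C, D}, E = {(A,B), (A,C), (A,D), (B,D), (C,D)}, treated as undirected
     std::map<char, std::vector<char>> adj;
     auto addEdge = [&](char u, char v) { adj[u].push_back(v); adj[v].push_back(u); };
     addEdge('A', 'B'); addEdge('A', 'C'); addEdge('A', 'D');
     addEdge('B', 'D'); addEdge('C', 'D');

     // Print each vertex followed by its neighbours.
     for (const auto& [v, nbrs] : adj) {
         std::printf("%c:", v);
         for (char n : nbrs) std::printf(" %c", n);
         std::printf("\n");
     }
     return 0;
 }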




Motivation

  • Many algorithms use a graph representation to represent data or the problem to be solved

  • Examples:
    • Cities with distances between them
    • Roads with distances between intersection points
    • Course prerequisites
    • Network
    • Social networks
    • Program call graph and variable dependency graph

  • C++ I/O. Note: Be very careful when you use getline after using cin! The first getline may return an empty line (why? see the short example after this list).
  • Another I/O tutorial by Yury Kholondyrev (warmingup).
  • C++ STL (vectors, lists, sets, maps, pairs) and algorithms. Part 1, Part 2, and Part 3.
  • Graph algorithms (BFS, colored DFS, shortest paths, union/find, spanning trees, Euler paths, flow and matching). Part 1, Part 2, Part 3, Part 4, and Part 5.
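
Regarding the getline note above: operator>> leaves the trailing newline in the input buffer, so the next getline reads an empty line unless you discard it first. A minimal sketch of the common fix using cin.ignore:

 #include <iostream>
 #include <limits>
 #include <string>

 int main() {
     int n;
     std::string line;
     std::cin >> n;                    // reads the number, but leaves '\n' in the buffer
     std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // discard the rest of that line
     std::getline(std::cin, line);     // now reads the actual next line, not an empty one
     std::cout << n << " | " << line << "\n";
     return 0;
 }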
Someone asked me to answer this question on Quora, and I got 200+ upvotes. So, I thought I should share this here on my blog also.



Well, you should read this book from start to end (from basics to mastery).

Here are my suggestions on how to read this book.

Reading and Understanding CLRS:

1. Use video lectures to understand the concept, and read the chapter from the book.
Best site for CLRS lecture videos :
Lecture 1: Administrivia, Introduction, Analysis of Algorithms, Insertion Sort, Mergesort

Video Lectures | Introduction to Algorithms (SMA 5503) | Electrical Engineering and Computer Science | MIT OpenCourseWare
Hello friends, today I'm sharing some contest templates created by world-class coders. I will add some more, and I'm creating one of my own for C++ as well. You should use these during competitions; they really help and increase your speed, since you don't have to type as much.

For C++...

Created by Zobayer

 #include <cassert>  
 #include <cctype>  
 #include <cmath>  
 #include <cstdio>  
 #include <cstdlib>  
 #include <cstring>  
 #include <climits>  
 #include <iostream>  
 #include <sstream>  
 #include <iomanip>  
 #include <string>  
 #include <vector>  
 #include <list>  
 #include <set>  
 #include <map>  
 #include <stack>  
 #include <queue>  
 #include <algorithm>  
 #include <iterator>  
 #include <utility>  
 using namespace std;  
 template< class T > T _abs(T n) { return (n < 0 ? -n : n); }  
 template< class T > T _max(T a, T b) { return (!(a < b) ? a : b); }  
 template< class T > T _min(T a, T b) { return (a < b ? a : b); }  
 template< class T > T sq(T x) { return x * x; }  
 template< class T > T gcd(T a, T b) { return (b != 0 ? gcd<T>(b, a%b) : a); }  
 template< class T > T lcm(T a, T b) { return (a / gcd<T>(a, b) * b); }  
 template< class T > bool inside(T a, T b, T c) { return a<=b && b<=c; }  
 template< class T > void setmax(T &a, T b) { if(a < b) a = b; }  
 template< class T > void setmin(T &a, T b) { if(b < a) a = b; }  
 #define MP(x, y) make_pair(x, y)  
 #define REV(s, e) reverse(s, e)  
 #define SET(p) memset(p, -1, sizeof(p))  
 #define CLR(p) memset(p, 0, sizeof(p))  
 #define MEM(p, v) memset(p, v, sizeof(p))  
 #define CPY(d, s) memcpy(d, s, sizeof(s))  
 #define READ(f) freopen(f, "r", stdin)  
 #define WRITE(f) freopen(f, "w", stdout)  
 #define ALL(c) c.begin(), c.end()  
 #define SIZE(c) (int)c.size()  
 #define PB(x) push_back(x)  
 #define ff first  
 #define ss second  
 #define i64 __int64  
 #define ld long double  
 #define pii pair< int, int >  
 #define psi pair< string, int >  
 const double EPS = 1e-9;  
 const double BIG = 1e19;  
 const int INF = 0x7f7f7f7f;  
 int main() {  
   //READ("in.txt");  
   //WRITE("out.txt");  
   return 0;  
 }  


You are probably interested in this post because it touches two complicated areas of programming, Bit Manipulation & Recursion. Indeed it does, and I will dig deep into both. Bear with my pace and be patient; you will get it. The best way is to take a problem and follow it through. Chances are you have already heard of this problem, but I will try to follow an approach that hopefully highlights some new things about it.

So first, the problem. Let's say your friend comes to you and puts a challenge across: Programming Interviews: Implement adding two unsigned numbers without using "+" or "++". You, being a problem solver, accept the challenge and try to work on a solution. You think: is there a possibility that by using any in-built operator other than '+', you will be able to solve this? Probably. So you try various operators: '-', '*', '%', and so on. You put all your effort into it, but the solution remains elusive. You think there is no harm in asking for help to get some pointers. Absolutely, no harm. You approach another nerdy friend to get some help on the problem. After seeing it, what do you think his first response would be? Think... Well, it is highly likely (if he is a nerd) that he will ask you this question: "Can you try bit operators?". It looks like your friend has taken quite a journey. He asked you this because he thinks that if an in-built language operator is not an option, can we do what the computer itself would be doing? Use bit operators.
So you get a cue. You have some basic idea about bit manipulation, so you start working on the program using bit operations. You look at the bit patterns of the two numbers which need to be added and do some mental wrestling:

19 = 10011
29 = 11101

Since you are aware of bit operations, what is the first thing that crosses your mind? Take a moment and think. Maybe you look at the whole binary representation of each number and think that adding bits is simple, but how do you handle the carry from right to left? If you considered the ENTIRE string and thought about moving the carry across, there is a problem: you are not breaking the problem into smaller subproblems. What you should have done instead is add the bits in each position independently and treat the carry as a new, smaller addition problem.
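
That observation leads to the classic XOR-and-carry formulation: a ^ b adds each bit position without carry, (a & b) << 1 is the carry, and you repeat until the carry is zero. A minimal recursive sketch of this standard approach:

 #include <cstdio>

 // Add two unsigned numbers without using '+' or '++':
 // a ^ b adds each bit position without carry, (a & b) << 1 is the carry,
 // and we recurse until there is no carry left.
 unsigned int add(unsigned int a, unsigned int b) {
     if (b == 0) return a;                 // nothing left to carry
     return add(a ^ b, (a & b) << 1);
 }

 int main() {
     std::printf("%u\n", add(19u, 29u));   // prints 48
     return 0;
 }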
Hello friends, I wrote a blog post on Quora about bitwise tricks, so I thought I should share those tricks here on my blog as well.

Here are some cool bitwise operations that you may not know!

Convert letter to lowercase:
  • OR by space => (x | ' ')
  • Result is always lowercase even if the letter is already lowercase
  • e.g. ('a' | ' ') => 'a' ; ('A' | ' ') => 'a'

Convert letter to uppercase:
  • AND by underscore => (x & '_')
  • Result is always uppercase even if the letter is already uppercase
  • e.g. ('a' & '_') => 'A' ; ('A' & '_') => 'A'

Invert letter's case:
  • XOR by space => (x ^ ' ')
  • e.g. ('a' ^ ' ') => 'A' ; ('A' ^ ' ') => 'a'

Letter's position in alphabet:
  • AND by chr(31) / binary('11111') / hex('1F') => (x & '\x1F')
  • Result is in the 1..26 range; letter case does not matter
  • e.g. ('a' & '\x1F') => 1 ; ('B' & '\x1F') => 2

Get letter's position in alphabet (for uppercase letters only):
  • AND by '?' => (x & '?'), or XOR by '@' => (x ^ '@')
  • e.g. ('C' & '?') => 3 ; ('Z' ^ '@') => 26

Get letter's position in alphabet (for lowercase letters only):
  • XOR by backtick / chr(96) / binary('1100000') / hex('60') => (x ^ '`')
  • e.g. ('d' ^ '`') => 4 ; ('x' ^ '`') => 25

Note: using anything other than English letters will produce garbage results.
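
A quick C++ sketch to check a few of these on real characters (the comments show what the ASCII arithmetic gives):

 #include <cstdio>

 int main() {
     std::printf("%c\n", 'A' | ' ');       // to lowercase: prints 'a'
     std::printf("%c\n", 'a' & '_');       // to uppercase: prints 'A'
     std::printf("%c\n", 'a' ^ ' ');       // invert case: prints 'A'
     std::printf("%d\n", 'B' & '\x1F');    // position in alphabet: prints 2
     std::printf("%d\n", 'Z' ^ '@');       // position (uppercase only): prints 26
     return 0;
 }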
Visualizations developed by David Galles

The best way to understand complex data structures is to see them in action. We've developed interactive animations for a variety of data structures and algorithms. Our visualization tool is written in Java using Swing, and runs well under OS X, most flavors of Linux, and most flavors of Windows.

Currently, we have visualizations for the following data structures and algorithms:

  • 1. Stacks (both array and linked list implementations)
  • 2. Queues (both array and linked list implementations)
  • 3. Lists (both array and linked list implementations; visitors are also demonstrated)
  • 4. Binary Search Trees
  • 5. AVL Trees
  • 6. B-Trees
  • 7. Heaps
  • 8. Binomial Queues
  • 9. Hash Tables
    • Separate Chaining (Open Hashing, Closed Addressing)
    • Closed Hashing (Open Addressing), including linear probing, quadratic probing, and double hashing
    • Both integers and strings as keys (with a nice visualization of elfhash for strings)
  • 10. Sorting Algorithms
    • Bubble Sort
    • Selection Sort
    • Insertion Sort
    • Shell Sort
    • Merge Sort
    • Quick Sort
    • Heap Sort
    • Counting Sort
    • Bucket Sort
    • Radix Sort
  • 11. Huffman Coding
  • 12. Graph Algorithms
    • Dijkstra's Algorithm
    • Prim's Algorithm
    • Kruskal's Algorithm (including a visualization of disjoint sets)
    • Breadth-First Search
    • Depth-First Search
    • Connected Components
    • Topological Sort
    • Floyd-Warshall (all pairs shortest paths)
  • 13. Dynamic Programming
    • Calculating the nth Fibonacci number
    • Making Change