- elements are not stored at contiguous (= adjacent) memory locations
- methods (a minimal sketch follows this list): `insert`, `delete`, `get_size`, `search_iterative`, `search_recursive`
- elements are linked using pointers:

      A[ data | next* ] ----> B[ data | next* ] ----> C[ data | next* ] ...
      ^
      HEAD
- not fixed size
- insertion of a new element at a given position can be expensive (the list must be traversed to reach that position)
- random access is not allowed
- requires extra memory for the pointer
- is not cache friendly (no locality of reference)
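A minimal sketch of a singly linked list in Python, illustrating the methods listed above (the `Node`/`LinkedList` names are my own, not from any library):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # pointer to the next node

class LinkedList:
    def __init__(self):
        self.head = None
        self.size = 0

    def insert(self, data):
        # insert at the head: O(1), no shifting of elements needed
        node = Node(data)
        node.next = self.head
        self.head = node
        self.size += 1

    def get_size(self):
        return self.size

    def search_iterative(self, key):
        current = self.head
        while current is not None:
            if current.data == key:
                return current
            current = current.next
        return None

    def search_recursive(self, node, key):
        if node is None:
            return None
        if node.data == key:
            return node
        return self.search_recursive(node.next, key)

    def delete(self, key):
        # unlink the first node holding key; O(n) because we must find it first
        current, previous = self.head, None
        while current is not None and current.data != key:
            previous, current = current, current.next
        if current is None:
            return False          # key not found
        if previous is None:
            self.head = current.next
        else:
            previous.next = current.next
        self.size -= 1
        return True
```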
- has a `previous` pointer, so it can be traversed in both directions
- quicker insert
- more efficient deletion
- but extra space for second pointer
- more pointer maintenance per operation, since both `next` and `previous` must be kept consistent (see the sketch below)
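A minimal sketch of why deletion is cheaper in a doubly linked list: given a reference to the node itself, it can be unlinked in O(1) without searching for its predecessor (the names below are illustrative, not from a library):

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def delete_node(head, node):
    """Unlink `node` from the list and return the (possibly new) head."""
    if node.prev is not None:
        node.prev.next = node.next
    else:
        head = node.next          # node was the head
    if node.next is not None:
        node.next.prev = node.prev
    node.prev = node.next = None  # detach the node completely
    return head
```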
- no `NULL` at the end (can be singly or doubly linked)
- any node can be the starting point
- handy for implementing a queue: no need for two pointers to `start` and `end`, only `end`, since `start = end->next` (see the sketch after this list)
- useful when the program repeatedly cycles over the same data (e.g. round-robin scheduling)
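A minimal sketch of a queue backed by a circular singly linked list, keeping only a tail pointer (the front is always `tail.next`); the names are illustrative:

```python
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = None

class CircularQueue:
    def __init__(self):
        self.tail = None  # the front of the queue is always self.tail.next

    def enqueue(self, data):
        node = CNode(data)
        if self.tail is None:
            node.next = node            # a single node points to itself
        else:
            node.next = self.tail.next  # new node points to the current front
            self.tail.next = node
        self.tail = node

    def dequeue(self):
        if self.tail is None:
            raise IndexError("dequeue from empty queue")
        front = self.tail.next
        if front is self.tail:          # only one element left
            self.tail = None
        else:
            self.tail.next = front.next
        return front.data
```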
- methods (all are
O(1)):push(add to start - if full, overflow)pop(remove item in the reverse order as pushed - if empty, underflow)peek/top(return top element)isEmpty
- implementation with array:
- easy to implement, saves memory (no pointers needed)
- BUT not dynamic (doesn't shrink or grow depending on runtime needs)
- implementation with linked list:
- more memory needed
- BUT dynamic
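A minimal sketch of both stack implementations in Python (class names are my own, chosen for illustration):

```python
class ArrayStack:
    """Fixed-capacity stack backed by an array (a Python list used as one)."""
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.top = -1

    def is_empty(self):
        return self.top == -1

    def push(self, item):
        if self.top == len(self.items) - 1:
            raise OverflowError("stack overflow")
        self.top += 1
        self.items[self.top] = item

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        item = self.items[self.top]
        self.top -= 1
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.items[self.top]


class LinkedStack:
    """Dynamic stack backed by a singly linked list; grows and shrinks at runtime."""
    class _Node:
        def __init__(self, data, next_node):
            self.data = data
            self.next = next_node

    def __init__(self):
        self.head = None

    def is_empty(self):
        return self.head is None

    def push(self, item):
        self.head = self._Node(item, self.head)  # new node becomes the top

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        item = self.head.data
        self.head = self.head.next
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.head.data
```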
- a tree in which each node has at most 2 children (left & right)
- hierarchical data structures
- topmost = root
- can be used e.g. for filesystems
- quicker access
- moderate insertion/deletion
- no upper limit on the number of nodes
- maximum number of nodes on level `i`: 2^(i-1) (counting the root as level 1)
- maximum number of nodes in a tree of height `h`: 2^h - 1
- for `n` nodes, the minimum possible height is ⌈log2(n + 1)⌉
- for `l` leaves, the least possible number of levels is ⌈log2(l)⌉ + 1
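As a quick check of these formulas: a tree of height h = 3 holds at most 2^3 - 1 = 7 nodes, and conversely n = 7 nodes need a height of at least ⌈log2(7 + 1)⌉ = 3.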
- full BT: all nodes have 0 or 2 children
- complete BT: all levels completely filled except possibly last level, with keys as left as possible
- perfect BT: all internal nodes have 2 children and all leaves are at the same level; if its height is `h`, it has 2^h - 1 nodes
- balanced BT:
  - height is O(log n), where `n` is the number of nodes
  - e.g. AVL, red-black trees
  - good because search, insert and delete are all O(log n)
- degenerate/pathological BT: every internal node has exactly 1 child; effectively a linked list
- a binary tree in which every node's key is greater than all keys in its left subtree and smaller than all keys in its right subtree, and both subtrees are themselves BSTs
- used in binary search
- methods:
- insert
- always happens at a leaf position: search for the key from the root down to a leaf, then attach the new node there
- delete
- if the node is a leaf, simply delete it
- if it has 1 child, copy the child to the node & delete the child
- if it has 2 children, find the inorder successor, copy it to the node, then delete the inorder successor (see the sketch after this list)
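A minimal BST sketch in Python covering insert, search, and the three delete cases described above (the class and function names are my own):

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # walk down from the root and attach the new key at a leaf position
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    if root is None or root.key == key:
        return root
    return search(root.left, key) if key < root.key else search(root.right, key)

def min_node(root):
    # the inorder successor of a node with a right subtree is the
    # minimum (leftmost node) of that right subtree
    while root.left is not None:
        root = root.left
    return root

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        # cases 1 and 2: leaf node or single child
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # case 3: two children -> copy the inorder successor, then delete it
        successor = min_node(root.right)
        root.key = successor.key
        root.right = delete(root.right, successor.key)
    return root

def inorder(root):
    # an inorder traversal yields the keys in sorted order
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```

For example, inserting 5, 3, 8, 1 and then calling `inorder` returns `[1, 3, 5, 8]`: this sorted-order property is one of the advantages over hash tables discussed next.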
A hash table has Θ(1) search, insert and delete.
A self-balancing BST has O(log n) search, insert and delete (e.g. red-black, AVL).
So why choose BST?
- Keys are kept in sorted order: an inorder traversal of a BST yields them sorted (not natural in a hash table)
- For order statistics, finding the closest lower/greater element, or range queries, a BST beats a hash table
- A BST is easier to implement from scratch (hash tables usually rely on library-provided hash functions)
- Self-balancing BSTs are O(log n), whereas hashing is Θ(1) on average but with occasional high costs for table resizing etc.
- Types:
- max-heap: every parent ≥ its children, so the root holds the maximum (heapsort with a max-heap yields ascending order)
- min-heap: every parent ≤ its children, so the root holds the minimum (heapsort with a min-heap yields descending order)
- uses:
- heapsort
- priority queues (see the `heapq` sketch after this list)
- more advanced variants: binomial heaps and Fibonacci heaps
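A short example using Python's standard `heapq` module (a binary min-heap) as a priority queue and to retrieve the k smallest values, as mentioned in the table further below:

```python
import heapq

data = [9, 4, 7, 1, 8, 3]

# build a min-heap in place in O(n)
heapq.heapify(data)

print(data[0])                   # 1 -- the minimum is always at the root
print(heapq.heappop(data))       # 1 -- items pop in priority (ascending) order

heapq.heappush(data, 2)          # insert a new element in O(log n)
print(heapq.nsmallest(3, data))  # [2, 3, 4] -- the k smallest values
```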
O(n) heapify (building a heap bottom-up)
# Method input: A (the array to turn into a heap)
heapsize := size(A)
for i := floor(heapsize / 2) downto 1   // each HEAPIFY (sift-down) call is O(log n)
    do HEAPIFY(A, i)
endfor
// naive bound for the loop: O(n log n); a tighter analysis shows building a heap this way is only O(n)
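A runnable Python version of the same bottom-up build-heap (0-indexed, max-heap); this is a sketch of the standard sift-down approach, not taken from any particular source:

```python
def sift_down(a, i, heapsize):
    # push a[i] down until the max-heap property holds for its subtree
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < heapsize and a[left] > a[largest]:
            largest = left
        if right < heapsize and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    # sift down every internal node, from the last one up to the root: O(n) overall
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i, len(a))

a = [9, 4, 7, 1, 8, 3]
build_max_heap(a)
print(a[0])  # 9 -- the maximum is now at the root
```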
| Structure | is good for... |
|---|---|
| Trees | range search |
| Hash tables | equality checking, ID-based search |
| Heaps | retrieving the k min or max values of a dataset |
- This FCC article
- Notes from my studies