C++ Interview Preparation
Thanks to my C++ guru, Scott Meyers.
Difference between Pointer and Reference

| Pointer | Reference |
| --- | --- |
| A pointer can be re-assigned any number of times. | A reference cannot be re-seated after binding. |
| A pointer can point nowhere (NULL/nullptr). | A reference must always refer to an object. |
| A pointer has its own memory address and size on the stack. | A reference shares the memory address of the object it refers to. |
| You can have pointers to pointers to pointers, offering extra levels of indirection. | References offer only one level of indirection. |
| A pointer must be dereferenced with * to access the memory location it points to. | A reference is used directly; no explicit dereference operator is needed. |
Difference between Array and Vector

| Array | Vector |
| --- | --- |
| The size of std::array is fixed at compile time, and no extra memory is used besides the storage of the array elements. | The size of std::vector can change dynamically as the program executes; you pay for this flexibility with a size overhead of roughly three pointers versus std::array. |
| Element access time is identical. | Element access time is identical. |
| Array implements a compile-time, non-resizable array. | Vector implements an array with fast random access and the ability to resize automatically when appending elements. |
| Once a chunk of memory is allocated for an array, its size remains fixed unless you reallocate or free the memory. | The size of a vector is not fixed. When new elements are inserted and the new size exceeds the current capacity, reallocation occurs: the vector typically allocates a new region of storage, moves the previously held elements to the new region, and frees the old one. |
| C-style arrays are a built-in language construct; their size must be a compile-time constant unless they are allocated dynamically. | std::vector is a template class implemented as a dynamic array; it grows and shrinks dynamically. |
| std::array has a fixed length, known at compile time. | std::vector resizes itself as needed. |
Difference between Vector and List

| Vector | List |
| --- | --- |
| Contiguous memory. | Non-contiguous memory; implemented as a doubly linked list. |
| Pre-allocates space for future elements, so extra space is required beyond what is necessary for the elements themselves. | No pre-allocated memory. The memory overhead for the list object itself is constant. |
| Each element requires only the space for the element type itself (no extra pointers). | Each element requires extra space for the node that holds it, including pointers to the next and previous elements in the list. |
| May re-allocate memory for the entire vector any time you add an element. | Never has to re-allocate memory for the whole list just because you add an element. |
| Insertions at the end take amortized constant time, but insertions elsewhere are a costly O(n). Erasures at the end of the vector are constant time, but elsewhere they are O(n). | Insertions and erasures are cheap no matter where in the list they occur, once you have an iterator to the position. |
| You can randomly access its elements. | You cannot randomly access elements, so reaching a particular element in the list can be expensive. |
| Iterators are invalidated if you add or remove elements to or from the vector. | Iterators remain valid even when you add or remove elements (except iterators to the erased elements). |
| You can easily get at the underlying array if you need an array of the elements. | If you need an array of the elements, you will have to create a new one and copy them all into it, since there is no underlying array. |
| A vector is an array with automatic memory management. The data is contiguous in memory, so inserting data in the middle is a costly operation. | In a list the data is stored in unrelated memory locations, so inserting in the middle does not involve copying part of the data to make room for the new element. |
Difference between Vector and Map

| Vector | Map |
| --- | --- |
| Vector is a sequence container. | Map is an associative container. |
| Vectors are essentially dynamic arrays: std::vector stores its data in an internal array that it resizes as you add elements. | Maps are usually implemented as balanced binary search trees. |
| Vectors are contiguous and therefore cache-friendly, but unless you sort the data, find has O(n) performance. | Maps tend to make many small allocations all over the heap; lookup and retrieval take O(log n), but the items may be scattered throughout memory. |
| Indexes in a vector are contiguous integers, and the values are guaranteed to be contiguous in memory. | In a map, neither keys nor values need to be contiguous. |
| std::vector associates an index (which is always an integer) with a value. | std::map associates a key of a customizable type with a value, and operator[] automatically inserts a key/value pair for any missing key it is given. |
| Finding an item in a very small vector can easily be faster than a map lookup, despite the asymptotic complexity. | |
Difference between Map and Set

| Map | Set |
| --- | --- |
| std::map is an associative container. | std::set is an associative container. |
| In std::map there is a value associated with each key: if A -> B, then map[A] = B. This works like hashing, but lookups are O(log n) rather than O(1). | The difference is that std::set contains only the keys, with no associated values. |
| std::map keeps its keys in sorted order. Both containers are implemented with balanced trees (such as AVL or red-black trees), giving O(log n) time complexity. | std::set keeps its data in sorted order. |
| An important point to note is that both store unique keys. | An important point to note is that both store unique keys. |
| C++ maps are sorted associative containers that contain unique key/value pairs. | The C++ set is an associative container that contains a sorted set of unique objects. |
Difference between Mutex and Semaphore

| Mutex | Semaphore |
| --- | --- |
| Mutexes and semaphores are kernel resources that provide synchronization services. | Mutexes and semaphores are kernel resources that provide synchronization services. |
| A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. While the buffer is being filled by the producer, the consumer must wait, and vice versa. | A semaphore is a generalized mutex. In lieu of a single buffer, we can split a 4 KB buffer into four 1 KB buffers (identical resources) and associate a semaphore with them. The consumer and producer can then work on different buffers at the same time. |
| Mutexes are typically used to serialize access to a section of code that is not re-entrant and cannot be executed concurrently by more than one thread. A mutex object allows only one thread into the controlled section, forcing other threads that attempt to gain access to wait until the first thread has exited. | A semaphore restricts the number of simultaneous users of a shared resource up to a maximum count. Threads can request access to the resource (decrementing the semaphore) and can signal that they have finished using it (incrementing the semaphore). |
| A mutex behaves like a semaphore with an initial value of 1, plus ownership. | |
| A mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex, so there is ownership associated with it, and only the owner can release the lock. | A semaphore is a signaling mechanism ("I am done, you can carry on"). For example, if you are listening to songs (one task) on your phone and a friend calls, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up. |
| A mutex provides ownership to the thread: only the thread that locked the mutex is allowed to release it. No other thread can release it. | A semaphore has no such ownership concept: another process or thread can release it simply by incrementing the count. |
| A mutex allows only one thread into the critical section. | A semaphore (with the exception of a binary semaphore) allows multiple threads to access the shared resources. |
| With a mutex, only the thread that took the lock holds the key, and only that thread can release the lock: it has an ownership property. | A binary semaphore has no ownership property, as any thread can take the key to open the lock. |
Difference between Mutex and Condition Variable

| Mutex | Condition Variable |
| --- | --- |
| A mutex is the basic tool enabling threads to cooperate on access to shared variables. It is a locking mechanism. | Condition variables allow a set of threads to sleep until awoken. Another thread can wake a sleeping thread by signaling the condition variable. |
| Think of a mutex as the lock on the door of a house, with only one key available: you need the key to enter the house. | A condition variable provides the mechanism by which, if you don't have the key, you sleep next to the door of the house; someone wakes you up when the key becomes available. |
| A mutex is a simple locking mechanism. When multiple threads or processes each need access to a resource at various times, they simply lock the mutex, do their work with the resource, and then unlock the mutex. As long as no one wants the resource at the same time, everything runs fine. However, if one process or thread tries to lock the mutex while another already holds it, it waits (giving up use of the processor) until the mutex is unlocked, at which point it is given an opportunity to run again and lock the mutex. | A condition variable (in conjunction with a mutex) allows more complex coordination between threads. It is used when one thread is responsible for doing some work but must wait for another thread to accomplish a particular task, put things in a proper state, or make data available before it can proceed. A thread locks the mutex and checks whether the conditions are right for it to do its work; if not, it waits on the condition variable. This wait releases the associated mutex (so another thread can lock it and work on making the condition true) and blocks the thread until the condition variable is signaled. When signaled, the waiting thread is given the mutex again (locked) so that it can proceed, or wait again if the condition is still not satisfied. |

A good example of where a condition variable might be used is message passing: a thread that wants to process a message but finds none waits on the condition variable. When another thread sends a message, it signals the condition variable so the waiting thread can process it.
Difference between Map and unordered_map

| map | unordered_map |
| --- | --- |
| Keys are in increasing order by default. | No ordering. |
| Implementation: self-balancing BST, such as a red-black tree. | Implementation: hash table. |
| Search time: O(log n). | Search time: O(1) average, O(n) worst case. |
| Insertion time: O(log n), plus rebalancing. | Insertion time: same as search. |
| Deletion time: O(log n), plus rebalancing. | Deletion time: same as search. |
| Lower memory usage. | Memory usage is higher than map, because unordered_map also needs space to store the hash table. |