Saturday, 19 April 2025

How to approach System design interview questions (HLD)

 Background

Design questions are an integral part of today's IT interview process. They can generally be categorized into
  1.  High-level design (HLD) - High-level architecture design of a system
  2.  Low-level design (LLD) - Low-level class / Component design of the system

In high-level design, you would typically get a question to design a system like YouTube, Google Maps, or an e-commerce website, and you would provide the high-level architecture of the system, starting from the client all the way to the backend systems that process and return the request. The focus would be on solving functional use cases while also addressing non-functional ones like scalability, availability, and performance.

In low-level design, however, you focus more on writing classes or components to solve the problem statement. For example, if you are designing a chess game, you need to design a class for each component, like the Board, Player, etc., along with the methods/variables used to implement their interactions effectively. The focus would be on implementing the solution while sticking to design principles like SOLID.

In this post, we will look at some common considerations when approaching a high-level design question. We will then take a specific design question and see how to approach it based on the various aspects discussed here.





How to approach System design interview questions

1. Summarize the requirements of the System and clarify open questions

The first thing you should do once you know the problem statement is summarize the requirements of the system (functional and non-functional) and clarify open questions. Once the requirements are summarized, check with the interviewer whether the list looks complete; if there is anything specific the interviewer wants to discuss, add it to the list of requirements. This is very important to build the foundation of how you approach the question. It helps you in the following ways
  1. Understand all aspects of the problem that you are trying to solve
  2. You can refer to these when you are actually building your system to ensure you have addressed all the requirements you noted earlier.
  3. This shows you can systematically address the question and not jump to solutions.
When summarizing functional and non-functional requirements, also mention why you think each requirement is important.

Let's see some examples
  1. Let's say you are designing a YouTube video upload and viewing service. 
    1. Functional requirements can be 
      1. The user should be able to upload the video
      2. The user should be able to specify video metadata like title, description, thumbnail, etc.
      3. The user should be able to view the video
      4. The user should be able to generate the caption, etc.
    2. Non-functional requirements can be
      1. The system should be available & fault-tolerant (no downtime)
      2. The system should be able to scale with load
      3. The system should be performant (Less video processing time, less rendering time, etc.)
      4. The system should be durable (Video once uploaded should be persisted)
While summarizing, make sure you ask open questions, e.g., 
  1. Do we want to support video streaming?
  2. What is the expected system load like? Designing a system for a global user base will require a highly scalable system.
  3. Do we want to consider front-end system design (UX experience) or just the backend design, etc?

2. Take time to think about your high-level approach and design

Once you clarify the requirements, you should take some time to think through what your approach should be to implement a design that solves the problem at hand (Do not jump to giving random suggestions - that's a red flag).

A few things you can think of
  1. What components will be involved in the design (API gateway, web services, persistent storage options, event-driven communication, etc), and how will they interact with each other?
  2. Go through your non-functional requirements and see what is needed to handle those. E.g., for scalability, you might need auto-scaling infrastructure, for availability, you might need a cluster of DB nodes, for performance, you might need in-memory caches, etc.
  3. You can also think about aspects like what APIs are needed, what the API structure will look like, and what the expected inputs and outputs of the APIs are. If you have to store information, think about what the DB schema will look like.
  4. Think about what DB you need - Relational or non-relational, etc.
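The API-contract and DB-schema thinking above can be made concrete for the YouTube example from step 1. Below is a rough sketch, where every endpoint name, field, and type is an illustrative assumption rather than a prescribed design:

```python
# Illustrative sketch only: endpoint names, fields, and types are assumptions
# made for this example, not a definitive design.

# POST /v1/videos — expected request body for the upload-metadata call
upload_request = {
    "title": "My first video",     # required, short text
    "description": "Hello world",  # optional, long text
    "thumbnail_url": None,         # optional, set after thumbnail upload
}

# Expected response body
upload_response = {
    "video_id": "v123",            # server-generated identifier
    "upload_url": "https://...",   # pre-signed URL to PUT the raw bytes
    "status": "PENDING_PROCESSING",
}

# A first cut of the corresponding table schema (relational flavour)
videos_schema = """
CREATE TABLE videos (
    video_id     VARCHAR PRIMARY KEY,
    title        VARCHAR NOT NULL,
    description  TEXT,
    status       VARCHAR NOT NULL,
    created_at   TIMESTAMP NOT NULL
);
"""

def validate_upload_request(body: dict) -> bool:
    """Minimal validation: the only hard requirement is a non-empty title."""
    return bool(body.get("title"))

print(validate_upload_request(upload_request))  # True
```

Writing the contract down like this before the architecture discussion makes it much easier to reason about storage, queue payloads, and caching later.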
Standard components that you might need
  1. API Gateway: The API gateway is probably the most common component in any architecture. It is used for rate limiting, authentication, response caching, and data transformations.
  2. Microservices node cluster: Modern systems use microservices rather than a monolithic architecture. You would typically want autoscaling enabled so that you can scale the microservice nodes on demand. Instead of actual server nodes, you could have a serverless architecture like AWS Lambda.
  3. Storage: You need storage options. It could be a database(Relational or non-relational), storage service like S3, etc. 
  4. Event-driven system: If you need an asynchronous workflow, you need to decouple your microservice nodes from the component that will process the request, e.g., image or video processing, which are expensive operations. For this, you can have a message queue system like ActiveMQ, AWS SQS, or Kafka.
  5. Caching system: To improve performance, you might want to have some in-memory caching system like Redis, memcache, etc.
  6. Cron jobs/ batch infra: For use-cases where you need a periodic job to run, you can use a cron-based scheduler or batch job infrastructure.
  7. Monitoring/Analytics systems: Depending on use cases, you might also need a monitoring setup, e.g., for system health, system load, or security alerts in cases like a DoS attack. There could also be an analytics requirement like monitoring logs, user access patterns, etc. E.g., AWS CloudWatch.
  8. CDN: Content delivery network or CDN is a set of servers at edge locations (near user locations) that are responsible for serving static content like CSS, images, etc., to reduce the latency.
Concepts that you need to consider for the design
  1. Authentication: As we saw above, the API gateway will handle authentication. You can have various authentication mechanisms like OAuth 2.0, JWT, etc.
  2. Relational vs NoSQL DB: You need to decide, based on system requirements, whether you need a relational DB like SQL Server, a NoSQL DB like MongoDB, or maybe something else. You can follow the general guidelines below to decide
    1. If you want to scale, e.g., if you are dealing with big data, you would generally want to go with NoSQL, as relational DBs might be difficult to scale. You can argue that sharding lets you scale a relational DB, and while that is a valid point, it becomes difficult beyond a threshold.
    2. If you do not have a fixed schema, then again, NoSQL is the way to go.
    3. If you need low latency, then again, NoSQL is the way to go.
    4. If you need ACID properties, then you should use a relational DB. For example, if you are designing a payment system, you should use a relational DB because you need strong consistency. If you are ok with eventual consistency, you can go with NoSQL.
    5. If your access queries and patterns are not fixed, then NoSQL is probably a good choice; otherwise, if they are fixed, a relational DB might work out.

  3. CAP theorem: The CAP theorem, also known as Brewer's theorem, states that a distributed system can only guarantee two out of three properties: Consistency, Availability, and Partition Tolerance. This means that designers must make trade-offs when building distributed systems, prioritizing the most important guarantees for their specific needs. 
  4. ACID properties: ACID is an acronym that stands for atomicity, consistency, isolation, and durability (ACID). Together, ACID properties ensure that a set of database operations (grouped together in a transaction) leave the database in a valid state even in the event of unexpected errors. Further, ACID transactions provide the level of transactional guarantees that many regulatory agencies require.
  5. DB concepts: If you do get into DB architecture discussions, you might need the following concepts handy
    1. Sharding
    2. Partitioning
    3. Indexes
    4. Batch writes
    5. Read replicas
  6. API design: You might have to design the APIs as well, for which you need to consider:
    1. API type (Socket communication / REST API's)
    2. Versioning
    3. Pagination
    4. API protocol (HTTP / HTTPS)
    5. API timeout 
    6. Rate limiting, etc.
  7. UX concepts
    1. Server-side rendering vs client-side rendering: If SEO (search engine optimization) is a use case, then we need server-side rendering, as the first render should have all the data, which is not the case with client-side rendering.
    2. Debouncing: Call the API or a function after a delay, so that if a user mistakenly clicks the submit button 3-4 times, the call is made only once.
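The debouncing idea above can be sketched in Python with threading.Timer. This is a simplified illustration; real front-end debouncing would typically live in JavaScript, and the 0.1-second delay here is just for the demo:

```python
import threading

def debounce(wait_seconds):
    """Debounce decorator sketch: the wrapped function runs once,
    wait_seconds after the last call in a burst of calls."""
    def decorator(fn):
        timer = None

        def wrapper(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()  # a newer call supersedes the pending one
            timer = threading.Timer(wait_seconds, fn, args, kwargs)
            timer.start()

        return wrapper
    return decorator


calls = []

@debounce(0.1)
def submit(order_id):
    calls.append(order_id)

# The user mistakenly clicks "submit" 3 times in quick succession...
for _ in range(3):
    submit("order-42")

threading.Event().wait(0.3)  # ...but the handler runs only once
print(calls)  # ['order-42']
```

Each new call cancels the pending timer, so only the last call in the burst actually fires.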

3. Implement your design

Once you have thought about your design, you can start implementing it. You can do this on a whiteboard or a Word doc (whatever is available). You should consider the following at this stage
  1. While suggesting a component, also give the reason you are suggesting it and what use case it solves. Be as vocal as you can and constantly communicate your thought process.
  2. You can expect probing questions meant to make sure you understand the depth of the solution you are suggesting. So do not suggest anything that you might have heard of but are not sure how it actually works and how it helps solve the use case at hand. You should be confident enough to back up your answers.
  3. Take pauses after periodic discussion to check if the interviewer has any questions about what has been suggested/discussed so far. 
  4. Be prepared for course corrections. The interviewer may ask what you would do if a requirement changes; you should be able to adjust your approach to align with the new goal. This again shows that you can make the necessary course corrections when needed.

4. Summarize your solution

Once implementation is done, go through your initial notes on functional and non-functional requirements and summarize how your system design handles them. 

NOTES:
  1. Design discussion is all about open discussion, unlike problem solving, where you have a specific solution/ direction for the problem.
  2. There can be more than one solution to the problem, so just be honest, confident, and vocal about your answers. Justify and back up your answers.
  3. Never make assumptions or jump to solutions. Clarify all the questions upfront. If you are suggesting something based on an assumption, mention that upfront as well.
  4. Do not propose or suggest anything that you are not aware of or cannot justify. If you suggest something but cannot justify it, it will be considered a red flag. It's ok to say that you are not aware of something specific, like how DB indexes or sharding work.


Friday, 11 April 2025

Understanding Global Interpreter Lock (GIL) in Python

 Background

If you have been using Python, you must have come across the term GIL or the fact that Python is primarily a single-threaded language. In this post, we will see what the Global Interpreter Lock (GIL) in Python is.



Understanding Global Interpreter Lock (GIL) in Python

The GIL, or Global Interpreter Lock, is a lock or mutex that allows only one thread to hold control of the Python interpreter. This means that only one thread can run at a time (you cannot use more than one CPU core at a time).

This essentially prevents Python's internal memory from being corrupted. The GIL ensures there are no dangling pointers or memory leaks. Think of this as the equivalent of every object in Python requiring a lock before it is accessed; if you did this on your own, it might cause deadlocks, which the GIL prevents, but it essentially makes execution single-threaded.

However, note that even with the GIL there can be race conditions, because not all operations are atomic. Let's say we have a method that takes an object and appends it to the end of an array. Thread 1 can enter the method, read the index at which it needs to add the new object, and then get suspended (releasing the GIL) before the insert happens. Thread 2 acquires the GIL and does the same for another object, then suspends. When thread 1 resumes, it writes to the same index thread 2 wrote to, overwriting that data and causing a race condition.
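The lost-update interleaving described above can be forced deliberately. In this contrived sketch, the sleep stands in for the point where a thread loses the GIL, so every thread reads the counter before any of them writes it back:

```python
import threading
import time

counter = 0

def increment():
    """Read-modify-write split across a point where the thread can lose the GIL."""
    global counter
    tmp = counter        # read the current value
    time.sleep(0.05)     # thread suspends here; other threads read the same value
    counter = tmp + 1    # write back a now-stale value

threads = [threading.Thread(target=increment) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # typically 1, not 5 — the other updates were lost
```

The fix in real code is an explicit lock (threading.Lock) around the read-modify-write, because the GIL alone does not make compound operations atomic.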

How does GIL prevent memory corruption? 

Python does memory management by keeping a count of references to each object. When this count reaches 0, the memory held by that object is freed.

Let's see an example of this
import sys

a = ["A", "B"]
b = a
print(f"References to a : {sys.getrefcount(a)}")

The output is: References to a : 3
It is 3 because the first reference is a, the second is b, and the third is created locally when the list is passed to the sys.getrefcount method. When this reference count reaches 0, the memory associated with the list object is released.

Now if multiple threads were allowed then we could have two threads simultaneously increasing or decreasing the count. This can lead to
  1. There are no actual references to the object, but the count is 1 due to a race condition. This is essentially a memory leak, i.e., the object is no longer referenced but cannot be garbage collected due to the bad reference count.
  2. There are still references to the object, but the count is 0 due to a race condition, and Python frees the object's memory. This leads to a dangling pointer.
One way to handle the above is to lock each object in Python before its reference count is updated. But as we know, locking comes with drawbacks like deadlock: if two threads each wait for a lock the other holds, a deadlock will happen. It can also hurt performance, as threads will frequently acquire and release locks.

The alternative is the GIL: a single lock on the interpreter itself. Any thread that needs to execute code must acquire the GIL to run that code via the interpreter. This prevents deadlocks but essentially makes Python single-threaded.


NOTE: Many potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL. 


Monday, 31 March 2025

Understanding Bisect module in Python

 Background

You would have often come across the use case of finding an element in a sorted list. Typically you would use binary search, which takes O(log N). Python provides a module that does this for us: it is called bisect. The methods in this module let us find the position in a sorted list where a new element can be inserted to keep the list sorted. In this post, we will explain how to use this module.


Understanding Bisect module in Python

Let's try to see the method bisect_left that gives us a position in a sorted list where a new element can be inserted to keep the list in sorted order. The syntax is
  • bisect_left(list, element, low, high)
The arguments are
  • list - a sorted list where we want to find the element to be inserted
  • element - The element to be inserted
  • low - the starting index of the list from where we want to start the search
  • high - the ending index of the list from where we want to end the search
See the "Related links" section at the end for the official documentation link.
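To build intuition for what bisect_left computes, here is a rough pure-Python equivalent (the real implementation is C-accelerated; this sketch is just for understanding):

```python
def bisect_left_equivalent(arr, element, low=0, high=None):
    """Return the left-most index where element can be inserted
    so that arr stays sorted (a plain binary search)."""
    if high is None:
        high = len(arr)
    while low < high:
        mid = (low + high) // 2
        if arr[mid] < element:
            low = mid + 1   # element belongs to the right of mid
        else:
            high = mid      # element belongs at mid or to its left
    return low

print(bisect_left_equivalent([1, 3, 5, 9, 10, 15], 6))  # 3
```

The loop narrows the window until low == high, which is exactly the left-most valid insertion point.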

Now let's see an example
import bisect

list = [1, 3, 5, 9, 10, 15]
idx = bisect.bisect_left(list, 6)
print(f"Index to insert 6 is {idx}")
list.insert(idx, 6)
print(f"New list {list}")

Above code prints: 
Index to insert 6 is 3
New list [1, 3, 5, 6, 9, 10, 15]

You can also do both of the steps shown above - getting the index to be inserted for maintaining sorting and inserting it into the list in a single API provided by bisect. The API is
  • insort_left(list, element)
Let's see how we can use the above API to achieve the same result
import bisect

list = [1, 3, 5, 9, 10, 15]
bisect.insort_left(list, 6)
print(f"New list {list}")

Output is: New list [1, 3, 5, 6, 9, 10, 15]

NOTE: bisect_left, as the name suggests, gives the left-most possible index at which to insert, whereas a similar API called bisect_right gives the right-most possible index. Similar to insort_left, we also have insort_right, which does in-place insertion of the given element into the given list.

Let's see an example of the above
import bisect

list = [1, 3, 5, 5, 5, 9, 10, 15]
idx = bisect.bisect_left(list, 5)
print(f"Left most index to insert 5 is {idx}")
idx = bisect.bisect_right(list, 5)
print(f"Right most index to insert 5 is {idx}")

Above code prints:
Left most index to insert 5 is 2
Right most index to insert 5 is 5


NOTE: Time complexity is O(logN) which is the same as that of binary search.


We can actually use this API for multiple use cases; let's see them below.

1. Binary Search

As we saw above, you can use bisect to implement binary search.
import bisect
def binary_search(arr, element, low, high):
    idx = bisect.bisect_left(arr, element, low, high)
    if idx < len(arr) and arr[idx] == element:
        return idx
    return -1


arr = [1, 3, 5, 7, 12, 20]
search_idx = binary_search(arr, 7, 0, len(arr))
print(f"Binary search index for 7 : {search_idx}")


Output is: Binary search index for 7 : 3

2. Prefix search

If your list contains all lowercase strings in sorted order, then we can use bisect for prefix search as well, as follows:
import bisect
def prefix_search(arr, prefix):
    idx = bisect.bisect_left(arr, prefix)
    if idx >= len(arr):
        return None
    el = arr[idx]
    return el if el.startswith(prefix) else None


arr = ["apple", "cat", "dog","elephant"]
print(prefix_search(arr, "ap"))

Output is: apple

3. Find the number of repeated values

If you have a sorted array and you want to find the number of times a number is repeated, we can use bisect again. Note that since the array is sorted, the occurrences of the number will be contiguous.
import bisect
def count_repeated_no(arr, no):
    left_idx = bisect.bisect_left(arr, no)
    right_idx = bisect.bisect_right(arr, no)

    if left_idx >= len(arr):
        return -1
    else:
        return right_idx - left_idx

arr = [1, 2, 5, 5, 5, 5, 9, 11]  # must be sorted for bisect to work
print(f"Count of 5 in array {count_repeated_no(arr, 5)}")

Output: Count of 5 in array 4

Performance

Note that this API is really fast. 
  1. First, because it uses binary search, its time complexity is O(log N).
  2. Second, it is implemented in C, so it's faster than implementing it yourself (see the SO question comparing it with a direct lookup in a list).

Related Links

Wednesday, 26 March 2025

SOLID principles in Object Oriented Programming

 Background

This post looks at the 5 SOLID design principles used for object-oriented software design. These principles are
  • S: Single Responsibility Principle (SRP)
  • O: Open/Closed Principle
  • L: Liskov’s Substitution Principle (LSP)
  • I: Interface Segregation Principle (ISP)
  • D: Dependency Inversion Principle (DIP)
Whenever you write new code or design new software, ensure you respect the SOLID principles listed above. This helps create clean, modular code that is easier to maintain and scale in the future. Let's see each of these in detail.




Single Responsibility Principle (SRP)

This principle states that - "A class should have a single responsibility". 

Let us take an example to understand this better. Let's assume you are building software for a restaurant, and you create a class called Worker with methods like prepare_food & serve_food.
class Worker:
    def prepare_food(self):
        print("Preparing Food")

    def serve_food(self):
        print("Serving Food")  


What do you think about the above design? It's bad, but why? Because it tightly couples 2 independent responsibilities: preparing food and serving food. Now if you make any change to how you prepare food in the future, you also need to test and ensure that the serve_food functionality is not impacted. This unnecessary coupling has made the code hard to maintain. A better design would be to have a separate class for each
  • Class Chef - For Preparing Food
  • Class Waiter - For serving Food
That way there is separation of concerns, and each class has the logic for the business use case it is responsible for. It's easier to maintain & scale.
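The Chef/Waiter split suggested above can be sketched as follows (returning strings instead of printing so the behaviour is easy to verify):

```python
# Each class now owns exactly one responsibility
class Chef:
    def prepare_food(self):
        return "Preparing Food"


class Waiter:
    def serve_food(self):
        return "Serving Food"


print(Chef().prepare_food())   # Preparing Food
print(Waiter().serve_food())   # Serving Food
```

A change to food preparation now cannot break serving, because the two live in separate classes.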

Open/Closed Principle

This principle states that - "A class should be open for extension but closed for modification". 

Let's see an example of this as well. Let's again take our above example of writing software for restaurant business. Let's say we are implementing a Chef class below.
class Chef:
    def prepare_chinese_food(self):
        print("Preparing Chinese Food")  




Do you see any potential problems with the above? What happens when your restaurant starts preparing continental cuisine? You will add another method called prepare_continental_food to the Chef class, and the number of methods in the class will keep growing as we add one for each new cuisine we support. This is bad because you are modifying code that is already tested and functioning, which means adding continental cuisine support has put the original implementation supporting Chinese at risk. 

To fix this, a better way would be to extend the Chef class to support the new functionality
class Chef:
    def prepare_food(self):
        print("Preparing Chinese Food")


class ContinentalChef(Chef):
    def prepare_food(self):
        print("Preparing Continental Food")


chef = ContinentalChef()
chef.prepare_food()


As you can see, you can now extend the base class Chef and add your new functionality without impacting the original implementation. If, for an additional feature, you have to make a lot of modifications to your original code, it is definitely violating this principle; step back and see how you can make your code generic so that fewer code changes are needed for future enhancements.

Liskov’s Substitution Principle (LSP)

This principle states - "Derived or child classes must be substitutable for their base or parent classes".

For this, we can again use the above example. What would have happened if we had implemented our Chef class a little differently?
  
class Chef:
    def prepare_chinese_food(self):
        print("Preparing Chinese Food")


class ContinentalChef(Chef):
    def prepare_chinese_food(self):
        print("Preparing Continental Food")

    def prepare_continental_food(self):
        print("Preparing Continental Food")


chef = ContinentalChef()
chef.prepare_chinese_food()


This is also bad because you have overridden the prepare_chinese_food functionality to actually prepare continental food. The caller of the Chef class will call prepare_chinese_food under the impression that Chinese food will be prepared, but due to the way the above classes are implemented, it will do something unexpected (prepare continental cuisine).


The principle says you should be able to use ContinentalChef anywhere you have used the Chef class without breaking the software contract. If you really intend ContinentalChef not to prepare Chinese food, you can raise an exception from the prepare_chinese_food method of the child class.

Interface Segregation Principle (ISP)

This principle states - "Do not force any client to implement an interface which is irrelevant to them".

This principle is very similar to the Single Responsibility Principle we saw above, except it talks about interfaces rather than classes.

For example, let's say you have a Chef class and a Cuisine interface with methods like create_chinese, create_continental, etc. Instead of having a single interface defining all the abstract methods, it's better to have separate interfaces - ChineseCuisine, ContinentalCuisine, etc. That way your concrete classes can choose to implement the ones that are correct for their use cases.

In the real world, a chef can know both cuisines or a single cuisine. By creating a single interface, you are forcing the concrete class to implement both methods even if it does not support both cuisines, which is bad. The concrete class should be free to choose which interfaces to implement based on the cuisines the chef can prepare.
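A minimal sketch of this idea, using Python's abc module to model the interfaces (the interface and class names here are illustrative assumptions):

```python
from abc import ABC, abstractmethod

# One small interface per cuisine instead of one fat Cuisine interface
class ChineseCuisine(ABC):
    @abstractmethod
    def create_chinese(self): ...


class ContinentalCuisine(ABC):
    @abstractmethod
    def create_continental(self): ...


# This chef only knows Chinese food, so it implements only that interface
class ChineseOnlyChef(ChineseCuisine):
    def create_chinese(self):
        return "Chinese dish"


# This chef knows both, so it opts into both interfaces
class AllRounderChef(ChineseCuisine, ContinentalCuisine):
    def create_chinese(self):
        return "Chinese dish"

    def create_continental(self):
        return "Continental dish"


print(AllRounderChef().create_continental())  # Continental dish
```

Notice that ChineseOnlyChef is never forced to stub out create_continental, which is exactly what ISP asks for.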

Dependency Inversion Principle (DIP)

This principle states - "High-level modules should not depend on low-level modules. Both should depend on abstractions"


This one is actually my favorite. Let's see an example to understand it. Let's say you have a Restaurant class that has a ChinesePreparingChef as an instance member. Tomorrow, if the chef changes to a ContinentalPreparingChef, you need to modify the Restaurant code to use the new class. 

Instead of this, you should define a base class called Chef and create two concrete classes - ChinesePreparingChef & ContinentalPreparingChef. The Restaurant will just have a reference to a Chef instance, which can be either of the concrete classes depending on the use case.
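A minimal sketch of this arrangement, using the class names described above (the return strings are just for illustration):

```python
from abc import ABC, abstractmethod

class Chef(ABC):
    """The abstraction both sides depend on."""
    @abstractmethod
    def prepare_food(self): ...


class ChinesePreparingChef(Chef):
    def prepare_food(self):
        return "Chinese Food"


class ContinentalPreparingChef(Chef):
    def prepare_food(self):
        return "Continental Food"


class Restaurant:
    """Depends only on the Chef abstraction, never on a concrete chef."""
    def __init__(self, chef: Chef):
        self.chef = chef

    def serve(self):
        return self.chef.prepare_food()


# Swapping the chef requires no change to Restaurant
print(Restaurant(ChinesePreparingChef()).serve())      # Chinese Food
print(Restaurant(ContinentalPreparingChef()).serve())  # Continental Food
```

The high-level Restaurant and the low-level chefs now both depend on the Chef abstraction, so either side can change independently.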




Friday, 7 February 2025

Handling missing values in DataFrame with Pandas

 Background

In the last few posts, we have seen the basics of Pandas - Series, DataFrame, how to manipulate data, etc. In this post, we will see how to handle missing values in a DataFrame.



Handling missing values in DataFrame with Pandas

Pandas accepts all of the following values as missing data
  • np.nan
  • pd.NA
  • None
We can use the isna or notna function to detect these missing data. 
  • The isna function evaluates each cell in a DataFrame and returns True to indicate a missing value. 
  • The notna function evaluates each cell in a DataFrame and returns True to indicate a non-missing value. 
Let's try to see an example for the above:

Code:
import pandas as pd
import numpy as np

df = pd.DataFrame({"Name": ["Aniket", "abhijit", pd.NA, "Anvi"],
                   "Role": ["IT Dev", None, "IT QA", np.nan],
                   "Joining Date": [20190101, 20200202, 20210303, 20220404]})

print(df.to_string())
print(df.isna())
print(df.notna())


Output:
      Name    Role  Joining Date
0   Aniket  IT Dev      20190101
1  abhijit    None      20200202
2     <NA>   IT QA      20210303
3     Anvi     NaN      20220404

    Name   Role  Joining Date
0  False  False         False
1  False   True         False
2   True  False         False
3  False   True         False

    Name   Role  Joining Date
0   True   True          True
1   True  False          True
2  False   True          True
3   True  False          True

You can now use the truth data to filter rows as follows

import pandas as pd
import numpy as np

df = pd.DataFrame({"Name": ["Aniket", "abhijit", pd.NA, "Anvi"],
                   "Role": ["IT Dev", None, "IT QA", np.nan],
                   "Joining Date": [20190101, 20200202, 20210303, 20220404]})

print(df[df["Role"].notna()])

Output:

     Name    Role  Joining Date
0  Aniket  IT Dev      20190101
2    <NA>   IT QA      20210303


Dropping (dropna) & replacement (fillna) of missing data

  • dropna: The dropna function is used to drop rows and columns with missing values. It takes the following arguments
    • axis - 0 for rows and 1 for columns
    • how - any to drop if any data point is missing, all to drop only if all data points are missing
    • thresh - the minimum number of non-missing data points required to keep the row/column
    • inplace - instead of returning a new modified DataFrame, drops in place on the same DataFrame
  • fillna: The fillna function is used to fill missing values with some data.

Example for dropna:
import pandas as pd
import numpy as np

df = pd.DataFrame({"Name": ["Aniket", "abhijit", pd.NA, "Anvi"],
                   "Role": ["IT Dev", None, "IT QA", np.nan],
                   "Joining Date": [20190101, 20200202, 20210303, 20220404]})

# Drop all columns with any missing data
print(df.dropna(axis=1, how="any"))

# Drop all rows with any missing data
print(df.dropna(axis=0, how="any"))

Output:
   Joining Date
0      20190101
1      20200202
2      20210303
3      20220404

     Name    Role  Joining Date
0  Aniket  IT Dev      20190101
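The thresh argument listed above has no example yet; here is a quick sketch reusing the same sample data (in pandas, thresh is the minimum number of non-missing values a row or column needs in order to be kept):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"Name": ["Aniket", "abhijit", pd.NA, "Anvi"],
                   "Role": ["IT Dev", None, "IT QA", np.nan],
                   "Joining Date": [20190101, 20200202, 20210303, 20220404]})

# thresh=3: keep only rows having at least 3 non-missing values.
# Only row 0 has Name, Role, and Joining Date all present.
print(df.dropna(axis=0, thresh=3))
```

Rows 1-3 each have only 2 non-missing values, so they are dropped.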

Example for fillna:
import pandas as pd
import numpy as np

df = pd.DataFrame({"Name": ["Aniket", "abhijit", pd.NA, "Anvi"],
                   "Role": ["IT Dev", None, "IT QA", np.nan],
                   "Joining Date": [20190101, 20200202, np.nan, 20220404]})

# Replace missing values with some default value
print(df.fillna({"Name": "Default Name", "Role": "Default Role"}))

# Replace missing joining date with latest/max joining date in data
print(df["Joining Date"].fillna(value=df["Joining Date"].max()))

# forward fill the missing data
print(df.ffill(limit=1))


Output:

           Name          Role  Joining Date
0        Aniket        IT Dev    20190101.0
1       abhijit  Default Role    20200202.0
2  Default Name         IT QA           NaN
3          Anvi  Default Role    20220404.0

0    20190101.0
1    20200202.0
2    20220404.0
3    20220404.0
Name: Joining Date, dtype: float64

      Name    Role  Joining Date
0   Aniket  IT Dev    20190101.0
1  abhijit  IT Dev    20200202.0
2  abhijit   IT QA    20200202.0
3     Anvi   IT QA    20220404.0

NOTE: Previously we could pass the method argument as bfill or ffill, but that is deprecated now. Warning message: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead

Thursday, 6 February 2025

String and Date manipulation in DataFrame with Pandas

Background

In the last post, we saw how to filter data in a DataFrame provided by the pandas library in Python. In this post, we will see how to manipulate string and date type columns in a DataFrame.



String and Date manipulation in DataFrame with Pandas


String manipulation

You can use df.dtypes to check the data types of columns in a DataFrame, and df.astype to convert column types. In this section, we will see how to manipulate and operate on string data types in a DataFrame. You can do this by accessing the string content using the .str accessor.

Code:

import pandas as pd

df = pd.DataFrame({"Name": ["Aniket", "abhijit", "awantika", "Anvi"],
                   "Role": ["IT Dev", "Finance Analyst", "IT QA", "Fun Play"],
                   "Joining Date": [20190101, 20200101, 20210101, 20220101]})

print(df)
print(df.dtypes)

# .str accessor to operate on string data type
df["Name_initials"] = df["Name"].str[0:3]
df["Department"] = df["Role"].str.split(" ", expand=True)[1]

# Use + operator to concatenate data
df["Name_Department_Combined"] = df["Name"] + "_" + df["Department"]

# Chain operations to get results in one line
df["Capitalized_Initials"] = df["Name"].str.capitalize().str[0:3]

print(df.to_string())

Output:

       Name             Role  Joining Date
0    Aniket           IT Dev      20190101
1   abhijit  Finance Analyst      20200101
2  awantika            IT QA      20210101
3      Anvi         Fun Play      20220101

Name            object
Role            object
Joining Date     int64
dtype: object

       Name             Role  Joining Date Name_initials Department Name_Department_Combined Capitalized_Initials
0    Aniket           IT Dev      20190101           Ani        Dev               Aniket_Dev                  Ani
1   abhijit  Finance Analyst      20200101           abh    Analyst          abhijit_Analyst                  Abh
2  awantika            IT QA      20210101           awa         QA              awantika_QA                  Awa
3      Anvi         Fun Play      20220101           Anv       Play                Anvi_Play                  Anv

In the above example, you can see various ways to manipulate str column types in a data frame.
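Beyond slicing and split, the .str accessor exposes many more vectorized string methods. A minimal sketch using the same Name data as above (upper, contains, and replace are standard pandas .str methods; note they are case-sensitive):

```python
import pandas as pd

names = pd.Series(["Aniket", "abhijit", "awantika", "Anvi"])

upper = names.str.upper()               # uppercase every value
has_an = names.str.contains("an")       # case-sensitive substring check
replaced = names.str.replace("a", "@")  # replace lowercase "a" in every value

print(upper.tolist())     # ['ANIKET', 'ABHIJIT', 'AWANTIKA', 'ANVI']
print(has_an.tolist())    # [False, False, True, False]
print(replaced.tolist())  # ['Aniket', '@bhijit', '@w@ntik@', 'Anvi']
```

Because the check is case-sensitive, "Aniket" and "Anvi" do not match the lowercase "an" and their capital "A" is not replaced.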

Date manipulation

Similar to manipulating strings, we can also manipulate date data types in pandas using the "dt" accessor. We will use the same example as above for date manipulation. For this we use a different data type called "datetime64[ns]"; this data type represents a timestamp, meaning it holds both a date and a time.

You can try executing the below code to see how this data type works:
  • print(pd.to_datetime("20240701"))
This outputs:
2024-07-01 00:00:00


In the example above we have a "Joining Date" column & as you can see in the output of the code above, its dtype currently prints as int64. We need to convert it to a datetime type before we do further manipulation on the date. As I mentioned above, we can convert the data type using the df.astype() method; for a non-standard format like 20190101 we instead use pd.to_datetime with an explicit format.

Code:

import pandas as pd

df = pd.DataFrame({"Name": ["Aniket", "abhijit", "awantika", "Anvi"],
                   "Role": ["IT Dev", "Finance Analyst", "IT QA", "Fun Play"],
                   "Joining Date": [20190101, 20200202, 20210303, 20220404]})

df['Joining Date'] = pd.to_datetime(df['Joining Date'], format='%Y%m%d')

# If the date was in the standard YYYY-MM-DD format you could use the below
# df = df.astype({"Joining Date": "datetime64[ns]"})
print(df.dtypes)

df["Joining Year"] = df["Joining Date"].dt.year
df["Joining Month"] = df["Joining Date"].dt.month
df["Joining Day"] = df["Joining Date"].dt.day
print(df.to_string())

Output:

Name                    object
Role                    object
Joining Date    datetime64[ns]
dtype: object

       Name             Role Joining Date  Joining Year  Joining Month  Joining Day
0    Aniket           IT Dev   2019-01-01          2019              1            1
1   abhijit  Finance Analyst   2020-02-02          2020              2            2
2  awantika            IT QA   2021-03-03          2021              3            3
3      Anvi         Fun Play   2022-04-04          2022              4            4

You can do more operations using the ".dt" accessor for date data types.
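For instance, a small hedged sketch of a few more ".dt" operations (day_name, quarter, and strftime are standard pandas datetime accessors; the dates are taken from the example above):

```python
import pandas as pd

dates = pd.to_datetime(pd.Series(["2019-01-01", "2020-02-02"]))

day_names = dates.dt.day_name()            # weekday name of each date
quarters = dates.dt.quarter                # quarter of the year
formatted = dates.dt.strftime("%d-%b-%Y")  # format back to strings

print(day_names.tolist())  # ['Tuesday', 'Sunday']
print(quarters.tolist())   # [1, 1]
print(formatted.tolist())  # ['01-Jan-2019', '02-Feb-2020']
```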

Related Links

Sunday, 2 February 2025

Filtering a DataFrame in Pandas

 Background

In the last post, we saw the basics of using the pandas library in Python, which is used for data analysis. We saw two basic data structures supported by pandas

  1. Series
  2. DataFrame
In this post, we will further see how we can filter data in a data frame. These are some of the most common operations performed for data analysis.



Filtering a data frame in Pandas

loc & iloc methods

To recap, a data frame is a two-dimensional data structure consisting of rows and columns. So we need a way to filter rows and columns efficiently. Two main methods exposed by data frame for this are

  1. loc - uses rows and column labels
  2. iloc - uses rows and column indexes

For example:

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10, (4, 4)), index=["a", "b", "c", "d"],
                  columns=["colA", "colB", "colC", "colD"])
print(df.loc[["a", "b"], ["colA", "colC"]])
print(df.iloc[:2, :3])

Output:

   colA  colC
a     4     7
b     1     4

   colA  colB  colC
a     4     9     7
b     1     1     4

The loc and iloc methods are frequently used for selecting or extracting a part of a data frame. The main difference is that loc works with labels whereas iloc works with indices.
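One subtle difference worth remembering: loc slices are inclusive of the end label, while iloc slices exclude the end position, and loc also accepts boolean masks for rows. A small sketch with fixed (non-random) data so the output is reproducible:

```python
import pandas as pd

df = pd.DataFrame({"colA": [1, 2, 3, 4], "colB": [10, 20, 30, 40]},
                  index=["a", "b", "c", "d"])

# Label-based slicing: the end label "c" IS included
label_slice = df.loc["a":"c", "colA"].tolist()    # [1, 2, 3]

# Position-based slicing: the end position 2 is NOT included
pos_slice = df.iloc[0:2]["colA"].tolist()         # [1, 2]

# loc also accepts a boolean mask for rows
masked = df.loc[df["colA"] > 2, "colB"].tolist()  # [30, 40]

print(label_slice, pos_slice, masked)
```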

Selecting subset of columns

We can get a Series (a single column data) from the data frame using df["column_name"], similarly, we can get a new data frame with a subset of columns by passing a list of columns needed. For eg.,

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10, (4, 4)), index=["a", "b", "c", "d"],
                  columns=["colA", "colB", "colC", "colD"])
print(df[["colA", "colC"]])
print(type(df[["colA", "colC"]]))


Output:

   colA  colC
a     5     2
b     7     2
c     4     3
d     9     1

<class 'pandas.core.frame.DataFrame'>

As you can see from the output, we selected 2 columns - colA and colC - and the result is a new DataFrame object.

Filtering by condition

You can also filter a data frame by conditions. Consider the following example:

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10, (4, 4)), index=["a", "b", "c", "d"],
                  columns=["colA", "colB", "colC", "colD"])
print(df[df["colA"] >= 1])


Output:

   colA  colB  colC  colD
a     3     9     5     6
b     8     5     9     6
c     9     4     1     4
d     8     4     3     5

The data frame has randomly generated data so the output will not be consistent, but you can confirm that the output will only contain rows where colA has a value greater than or equal to 1, as we specified in the filtering condition.

You can also specify multiple conditions with the & (and) or | (or) operators. Consider the following example:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10, (4, 4)), index=["a", "b", "c", "d"],
                  columns=["colA", "colB", "colC", "colD"])
print(df[(df["colA"] >= 1) | (df["colB"] <= 5)])



Output:
   colA  colB  colC  colD
a     2     4     4     7
b     4     4     3     6
c     1     9     1     9
d     2     3     8     3

Again, the output will not be consistent due to the randomness of the data, but you should get output that matches the filtering conditions. The following comparison operators are supported:
  • ==: equal
  • !=: not equal
  • >: greater than
  • >=: greater than or equal to
  • <: less than
  • <=: less than or equal to
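A condition can also be inverted with the ~ operator. A minimal sketch with fixed data so the result is reproducible:

```python
import pandas as pd

df = pd.DataFrame({"colA": [1, 5, 8], "colB": [2, 6, 9]})

# ~ inverts the boolean mask: keep rows where colA >= 5 does NOT hold
filtered = df[~(df["colA"] >= 5)]
print(filtered)  # only the row with colA == 1
```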
You can also use the .isin method to filter data as follows.
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(1, 10, (4, 4)), index=["a", "b", "c", "d"],
                  columns=["colA", "colB", "colC", "colD"])
print(df[df["colA"].isin([1, 2, 3])])

Output:
   colA  colB  colC  colD
b     1     7     6     4
c     2     4     9     7


Related Links

Getting started with Pandas library in Python

 Background

If you have worked in data analysis or data science roles, you would have used the pandas/NumPy libraries in Python, which come in handy. In this post, we will see how to get started with the pandas library. This post assumes you have a basic understanding of Python.



Installing pandas library

You can install the pandas library using pip:

pip install pandas

Or you can directly install it in PyCharm as a package. You can go to
Settings -> Project -> Python Interpreter and click on the "+" icon to search for & add the pandas package.



Once you have installed pandas library you can import it as 
  • import pandas as pd
and start using it

Data structures supported in Pandas

Pandas library supports 2 main data structures

  1. Series: One-dimensional array with axis labels.
  2. DataFrame: Two-dimensional data structure with labelled rows and columns.

Series

Let's try to see how Series works. You can simply create a series from a python array
import pandas as pd

test_series = pd.Series([11, 12, 13])
print(test_series)

Output is:
0    11
1    12
2    13
dtype: int64

As you can see from the output, we have an integer index (0,1,2) and 1-dimensional data with values [11,12,13]. You can also use a string index for this one-dimensional array, and you can use that index to access the data from the Series.
import pandas as pd

test_series = pd.Series([11, 12, 13], index=["RowA", "RowB", "RowC"])
print(test_series)
print(test_series["RowA"])

Output is:
RowA    11
RowB    12
RowC    13
dtype: int64
11

DataFrame

To create a data frame you can simply pass a dictionary where the keys of the dictionary form the columns and the values of those keys form the data.
import pandas as pd

df = pd.DataFrame({
    "ColA": [11, 12, 13],
    "Col B": [21, 22, 23],
    "Col C": [31, 32, 33],
}, index=["Row A", "Row B", "Row C"])

print(df)

Output is:
       ColA  Col B  Col C
Row A    11     21     31
Row B    12     22     32
Row C    13     23     33


As with Series, passing an index is optional; if you do not pass one, the default behavior is to use integer indexes (0,1,2... etc.). Similarly, if you do not assign explicit column names then integer columns are used.

DataFrame support various methods
  • df.head(): Gives first 5 rows. 
  • df.size: Gives number of cells in data frame (no of rows * no of columns). For above example output will be 9.
  • df.shape:  Gives dimension of data frame. For above example output will be (3,3)
  • len(df): Give number of rows in data frame. For above example output will be 3.
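The methods above can be verified against the 3x3 data frame used in this post:

```python
import pandas as pd

df = pd.DataFrame({
    "ColA": [11, 12, 13],
    "Col B": [21, 22, 23],
    "Col C": [31, 32, 33],
}, index=["Row A", "Row B", "Row C"])

print(df.size)     # 9 cells (3 rows * 3 columns)
print(df.shape)    # (3, 3)
print(len(df))     # 3 rows
print(df.head(2))  # first 2 rows
```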

For checking the data type and converting the column type we can use below methods:
  • df.dtypes : Gives data types of columns present in the data frame
  • df.astype : Method to convert the data type of a column
Consider the following example:
import pandas as pd

df = pd.DataFrame({
    "ColA": [11, 12, 13],
    "Col B": [21, 22, 23],
    "Col C": [31, 32, 33],
}, index=["Row A", "Row B", "Row C"])

print(df.dtypes)
df = df.astype({"ColA": float})
print(df.dtypes)

Output:
ColA     int64
Col B    int64
Col C    int64
dtype: object
ColA     float64
Col B      int64
Col C      int64
dtype: object

Once you have a data frame in place you can reference the individual columns and perform analysis on them. A single column referenced from a data frame supports the below operations:
  • df.col.nunique(): Returns the number of unique elements
  • df.col.unique(): Returns the actual unique elements
  • df.col.mean(): Returns the mean of column values
  • df.col.median(): Returns the median of column values
  • df.col.value_counts(): Returns unique values and their counts
Consider below example:
import pandas as pd

df = pd.DataFrame({
    "Col A": [11, 12, 13],
    "Col B": [21, 22, 23],
    "Col C": [31, 32, 33],
}, index=["Row A", "Row B", "Row C"])

print(df["Col A"].nunique())
print(df["Col A"].unique())
print(df["Col A"].mean())
print(df["Col A"].median())
print(df["Col A"].value_counts())

Output is:
3
[11 12 13]
12.0
12.0
Col A
11    1
12    1
13    1
Name: count, dtype: int64


Note that column of data frame is actually a series
df = pd.DataFrame({"A":[1,2,3], "C": [1,2,3]})
print(type(df))
print(type(df.A))

Output:
<class 'pandas.core.frame.DataFrame'>
<class 'pandas.core.series.Series'>

Related links

Saturday, 25 January 2025

Using 'Union find' pattern for problem solving questions

 About the pattern

The pattern is used to group elements into sets based on specified properties. Each element is unique and the sets are non-overlapping meaning each set has distinct elements. Each set forms a tree data structure, each element has a parent and the root of the tree is called the representative element of that set. The parent of this representative node is the same node (itself). If we pick any element of a set and follow its parent node then we will always reach the representative element of the set (root of the tree representing the disjoint set).

The pattern is implemented using two methods
  1. find(node): Find the representative of the set containing the node.
  2. union(node1, node2): Merges the sets containing node1 and node2 into one.

Let's say we have an array representing numbers 0 to 5. Before we start applying this pattern each element has itself as the parent.

This means we have 6 different unique sets each containing one element. Now we want to start grouping them into a unique set (typically we will have more than one unique set which we will see with an actual example in some time but for this example consider  we want to create a single set).


  1. Do union(0, 1): This will merge nodes 0 and 1 together. Change the representative of 0 to 1.


  2. Similarly do
    1. union(1,2)
    2. union(2,3)
    3. union(4,5): Notice how we merged 4 and 5 instead of 3 and 4. This is done just to give you an idea that an element can merge into any disjoint sets based on the property under consideration.
    4. union(3,5)
At the end, we have a tree below:


As you would have imagined by now, as you build this disjoint set the height of the tree can be O(N) in the worst case, and so is the time complexity of the find operation in this pattern.

Optimization: We can optimize this further by maintaining the rank of each element, which denotes the number of nodes beneath it. The rank of the representative node then denotes the size of the tree it represents (the number of child nodes it has). We can use this rank to decide which of the two representative nodes becomes the parent (the new representative) while merging the two trees corresponding to the two disjoint sets. E.g., above, node 5 at the end of the iteration has a rank of 6 (5 nodes under it + 1 counting itself).

With the above optimization, you now select as the new representative the node with the higher rank among the two being merged. So it will look something like below:




As you would have guessed by now, the tree stays balanced with the above approach, which guarantees that the TC for find is reduced to log(N) - the height of the tree.

So if I have to find the representative of 4, I find its parent, which is 5, and then recursively find its parent till we reach the root, which in this case is 1. Once the root is reached (the node that is its own parent) we return that node as the representative. As we traverse the height of the tree, the TC is log(N).
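A further standard optimization (not used in the solution below, to keep it simple) is path compression: while finding the representative, re-point every visited node directly at the root, so subsequent find calls on the same nodes become nearly constant time. A minimal sketch with a hypothetical parent array:

```python
def find(parent, v):
    # First pass: walk up to the root (the node that is its own parent)
    root = v
    while parent[root] != root:
        root = parent[root]
    # Second pass: re-point every node on the path directly at the root
    while parent[v] != root:
        parent[v], v = root, parent[v]
    return root

parent = [1, 2, 2, 2, 5, 1]  # hypothetical parent array; node 2 is the root
print(find(parent, 0))       # 2
print(parent[0])             # 0 now points straight at the root: 2
```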

Using 'Union find' pattern

Now let's try to use this pattern to solve an actual problem-solving question

Problem statement:
For a given integer, n, and an array, edges, return the number of connected components in a graph containing n nodes. 
The array edges[i] = [x, y] indicates that there’s an edge between x and y in the graph.

Constraint:
  • 1<=n<=1000
  • 0<=edges.length<=500
  • edges[i].length == 2
  • 0<=x,y<n
  • x!=y
  • No repeated edges
Notice how the problem statement says that the elements (vertices of the graph) are unique, and there are no repeated edges. Let's see how we can implement this using the union-find method.

Solution:
class UnionFind:

    def __init__(self, n):
        self.parent = []
        for i in range(n):
            self.parent.append(i)
        self.rank = [1] * n
        self.connected_components = n

    def find(self, v):
        if self.parent[v] != v:
            return self.find(self.parent[v])
        return v
   

    def union(self, x, y):
        p1, p2 = self.find(x), self.find(y)
        if p1 != p2:
          if self.rank[p1] < self.rank[p2]:
            self.parent[p1] = p2
          else:
            self.parent[p2] = p1
          self.connected_components = self.connected_components - 1
		
def count_components(n, edges):
  uf = UnionFind(n)
  for edge in edges:
    v1 = edge[0]
    v2 = edge[1]
    uf.union(v1, v2)  
  return uf.connected_components


If we run the above for the following data sets it gives the correct results:
Example 1:
  • Input: 5 , [[0,1],[1,2],[3,4]] 
  • Output: 2
Example 2:
  • Input: 6 , [[0,1],[3,4],[4,5]]
  • Output: 3
Now let's try to understand the solution and implementation of the pattern. The core logic of the union pattern is in the UnionFind class. In the constructor, we have initialized 3 things
  1. parent - list that tracks the parent of each element
  2. rank - list that tracks the rank of each element
  3. connected_components - number of connected components

At the start each node has itself assigned as the parent and the rank of each node is 1. Similarly, since each node is separate at the start number of connected components is equal to the number of nodes.

As we iterate over the edges we pass them to the union method to merge the endpoints into a single set - remember the pattern splits unique elements into disjoint sets (connected components in this case). Each time two vertices from different sets are merged into one set (representing a connected component), we reduce connected_components by 1. At the end, when every edge has been processed, the disjoint sets represent the connected components, so we simply return connected_components, which we have been using to track the count.

Also, notice how we are using rank to ensure that while merging two disjoint sets we set the parent as the one with the higher rank, keeping the resultant tree balanced so that find() operations take log(N).


Related links






Thursday, 16 January 2025

How to get "i" th bit of a number in Python

 Background

In various problem-solving techniques, we often need to find the i'th bit of a number. For eg., let's say we have the number 5; its binary notation is 101. The 0th bit is 1, the 1st bit is 0, and the 2nd bit is again 1. In this post, we will see how to get the ith bit for a given number in Python.


Use of getting i'th bit

Before we see how to get ith bit let's try to understand a problem-solving use case where we can use this logic. Consider the following problem statement:

Given an array of integers, "data" , find all possible subsets of data, including the empty set.

The question is basically asking us to find all possible subsets of a given array of integers.
For e.g, consider input as [2, 5, 7] the possible combinations are
  • i=0: {}
  • i=1: {2}
  • i=2: {5}
  • i=3: {2,5}
  • i=4: {7}
  • i=5: {2,7}
  • i=6: {5,7}
  • i=7: {2,5,7}
The total combinations possible for a given list is 2^N where N is the length of the array. In the above case since N=3 (length of input array) the expected combinations are 2^3 = 8 which is exactly what you see above.

Now to compute these we iterate from i=0 to 2^N-1 which in this case will be from 0 to 7.
For each of these numbers, we can find the binary notation and then check if the binary bit is 1 for a given index and include it in the combination.

For eg., let's take i=3: its binary notation is 011, meaning bits 0 and 1 are set, so we should consider the numbers at the 0th and 1st positions of the original array. Since the original array was [2,5,7], the subset is {2,5}, which is what you see in the above combination list as well.

For all "i" values subsets computed are shown below:



Now let's see the Python code to get the ith bit for a number.

NOTE: In Python, computing 2^N is written as 2 ** N; in Java you would use Math.pow(2, N).

How to get "i" th bit of a number in Python

We will test the same example we took above when num =3 and we want to get all bits of it to decide which position data from the original array to include in the current subset. Python code for the above is as follows:
def get_ith_bit(num, i):
    # Shift the operand specified number of bits to the left
    temp = (1 << i)
    temp = temp & num
    if temp == 0:
        return 0
    return 1

print(get_ith_bit(3, 0))
print(get_ith_bit(3, 1))
print(get_ith_bit(3, 2))

Output is:
1
1
0
which gives bits 1, 1, 0 for positions 0, 1, 2 - i.e., 3 is 011 in binary, as we saw before.

Complete code

The complete code for the problem statement is as follows:
PS: Given an array of integers, "data" , find all possible subsets of data, including the empty set.
def get_ith_bit(num, i):
    # Shift the operand specified number of bits to the left
    temp = (1 << i)
    temp = temp & num
    if temp == 0:
        return 0
    return 1

def find_all_subsets(data):
    subsets = []

    if not data:
        return [[]]
    else:
        combinations = 2 ** len(data)
        for i in range(0, combinations):
            subset = set()
            for j in range(0, len(data)):
                if get_ith_bit(i, j) == 1 and data[j] not in subset:
                    subset.add(data[j])

            if i == 0:
                subsets.append([])
            else:
                subsets.append(list(subset))
    return subsets

print(find_all_subsets([2,5,7]))
Output:
[[], [2], [5], [2, 5], [7], [2, 7], [5, 7], [2, 5, 7]]


Related links

Sunday, 12 January 2025

Working with heaps in Python

 Background

As we all know a heap is a complete binary tree that satisfies the heap property:

  • Min heap: The value of children is greater than or equal to the parent node. This means the root node of a min heap stores the lowest value or minimum value data.
  • Max heap: The value of children is smaller than or equal to the parent. This means that the root node of a max heap stores the highest value or the maximum value data.

Heaps are used to implement priority queues for various use cases. See the below diagrams on what constitutes a valid and invalid heap (Credit: GeeksForGeeks)

Valid min heaps



Invalid min heaps


Valid max heaps


Invalid max heaps


It is an interesting read on how to insert in a heap, how to remove from the heap, how to heapify an array, how to use an array to represent a heap (The above diagrams show binary tree representation), how to use heaps for implementing priority queue etc. 

The time complexity for various heap operations is as follows:



Working with heaps in Python

Now that we have brushed up on the heap data structure above, let's get to the original purpose of this post, which is to understand how to work with heaps in Python.

In Java, you would usually use PriorityQueue implementation to work with heaps. In Python we use python’s inbuilt library named heapq.

Look at the following code:

import heapq

data = [3, 5, 9, 14, 4, 24, 2]
print(f"Original data: {data}")
heapq.heapify(data) # Create min heap
print(f"After heapify: {data}")
heapq.heappush(data, 1)
print(f"After pushing to heap: {data}")
heapq.heappop(data)
print(f"After poping from heap: {data}")

It Prints:

Original data: [3, 5, 9, 14, 4, 24, 2]
After heapify: [2, 4, 3, 14, 5, 24, 9]
After pushing to heap: [1, 2, 3, 4, 5, 24, 9, 14]
After poping from heap: [2, 4, 3, 14, 5, 24, 9]

As you can see, we initially had a simple list called data which we then converted into a min heap (the default heap behavior) using the heapify method (O(N) time complexity). We then pushed an element onto the min heap (O(log N) time complexity) and then popped one, which removes the root element - the minimum element in the case of a min heap (also O(log N) time complexity).

NOTE: If you want max heap implementation you can just negate the data and push it in the heap.
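The negation trick from the note above can be sketched as follows (using the same data as before):

```python
import heapq

data = [3, 5, 9, 14, 4, 24, 2]

# Negate values so the largest element sits at the root of the min heap
max_heap = [-x for x in data]
heapq.heapify(max_heap)

# Negate again when popping to recover the original values
largest = -heapq.heappop(max_heap)  # 24
second = -heapq.heappop(max_heap)   # 14
print(largest, second)
```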

NOTE: Heap elements can be tuples. This is useful for assigning comparison values (such as task priorities) alongside the main record being tracked. The 1st element in the tuple is used for the comparison/priority.

See below code of how we can add tuple in the heap:

import heapq

data = []
heapq.heappush(data, (3, "China"))
heapq.heappush(data, (1, "India"))
heapq.heappush(data, (2, "USA"))
print(data)

It prints:

[(1, 'India'), (3, 'China'), (2, 'USA')]


PriorityQueue class

We can also use the PriorityQueue class directly in Python. This internally uses the heapq library but differs in the following ways:

  1. It's synchronized, so it supports concurrent processes. 
  2. It's a class interface as opposed to the function-based interface of heapq.

You can see following example on how to use PriorityQueue class in python

from queue import PriorityQueue

pQueue = PriorityQueue()
pQueue.put((3, "China"))
pQueue.put((1, "India"))
pQueue.put((2, "USA"))

while not pQueue.empty():
    print(pQueue.get())

Above also prints:

(1, 'India')
(2, 'USA')
(3, 'China')

Related Links

Saturday, 11 January 2025

Working with dictionaries in Python

 Background

In this post, we will see how we can use dictionaries and specifically how we can enable a custom class in Python to be used as a dict key. I am originally from a Java background and have written posts before on how HashMap/HashTable works in Java (See Related links section towards the end of this post). 

If someone is new
  • Dictionaries are data structures that store data of format key: value pairs
  • Data stored in dict is unique (does not allow duplicate keys), mutable (can edit), and ordered
NOTE: As of Python version 3.7, dictionaries are ordered. In Python 3.6 and earlier, dictionaries are unordered.

Dictionaries in Python

You can initialize a dictionary in Python using {} or dict keyword

  • my_dict = {}
  • my_dict_1 = {"Name": "Aniket", "Country": "India"}
  • my_dict_2 = dict(name="Aniket")
You can print these and see the dictionary. It will print
{}
{'Name': 'Aniket', 'Country': 'India'}
{'name': 'Aniket'}


A plain dict in Python 3.6 and earlier was unordered, meaning the order in which you insert data into the dict was not maintained; iterating over its items could give you a different order. This is the same as HashMap in Java. However, similar to LinkedHashMap, we have OrderedDict in Python (note again that as of Python version 3.7, regular dictionaries are ordered).
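For reference, OrderedDict lives in the collections module and additionally offers methods like move_to_end, which plain dicts lack. A minimal sketch:

```python
from collections import OrderedDict

od = OrderedDict()
od["name"] = "Aniket"
od["country"] = "India"
od.move_to_end("name")  # move the "name" entry to the last position

print(list(od.keys()))  # ['country', 'name']
```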

You can also merge one dict into another using the update method. Remember, keys of a dict are unique, and similar to HashMap, if you try to insert a key that is already present in the dict then it overrides that key's data with the new value (see below code).
my_dict = {}
my_dict["name"] = "Aniket"
my_dict["Country"] = "India"
for key in my_dict:
    print(f"{key} : {my_dict[key]}")
print("--------------")
others_dict = {}
others_dict["name"] = "John"
others_dict["State"] = "NYC"
for key in others_dict:
    print(f"{key} : {others_dict[key]}")
print("--------------")
my_dict.update(others_dict)
for key in my_dict:
    print(f"{key} : {my_dict[key]}")


Output is:
name : Aniket
Country : India
--------------
name : John
State : NYC
--------------
name : John
Country : India
State : NYC


See how the name key's value was overridden by the new dict, the Country key was not overridden so it stayed the same, and finally a new key called State was added.

Using a custom class as a key in dict

In Java, for a new class, you would typically override the equals and hashCode methods to make it work as a key; in Python we have to do something similar.

Let's create an Employee class with just name and age, try to create objects out of it, push to dict and print it.

class Employee:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"Employee with name: {self.name} & age: {self.age}"


my_dict = {}
e1 = Employee("Aniket", 30)
e2 = Employee("Aniket", 30)
my_dict[e1] = 1
my_dict[e2] = 2
for key, value in my_dict.items():
    print(f"{key} : {value}")

Output is:
Employee with name: Aniket & age: 30 : 1
Employee with name: Aniket & age: 30 : 2


As you can see from the output, it treated the two keys as different keys in the dict and added separate entries. Since the employee (uniquely defined by name and age) is the same, we want just one entry in the dict corresponding to a unique name and age. In this case it should have overridden the value from 1 to 2 and printed a single entry with value 2. Let's see how we do that.


For the above to work similarly to Java (where we override equals and hashcode), in Python we override the two methods below
  • __eq__()
  • __hash__()

So your code will now look like

class Employee:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        return f"Employee with name: {self.name} & age: {self.age}"

    def __eq__(self, other):
        return (self.name, self.age) == (other.name, other.age)

    def __hash__(self):
        return hash((self.name, self.age))

my_dict = {}
e1 = Employee("Aniket", 30)
e2 = Employee("Aniket", 30)
my_dict[e1] = 1
my_dict[e2] = 2
for key, value in my_dict.items():
    print(f"{key} : {value}")


This now prints:
Employee with name: Aniket & age: 30 : 2

 which is in line with our expectations.

See the below diagram for all methods supported by Python dict



Related Links