# Introduction
Most data scientists learn pandas by reading tutorials and copying patterns that work.
That is fine for getting started, but it often leaves beginners with bad habits. iterrows() loops, chains of intermediate variable assignments, and repetitive merge() calls are all examples of code that is technically correct but slower than necessary and harder to read than it should be.
The patterns below are not edge cases. They cover the most common daily operations in data science, such as filtering, transforming, joining, grouping, and computing conditional columns.
In each of them, there is a common approach and a better approach, and the distinction is typically one of awareness rather than complexity.
These six have the greatest impact: method chaining, the pipe() pattern, efficient joins and merges, groupby optimizations, vectorized conditional logic, and performance pitfalls.
# Method Chaining
Intermediate variables can make code feel more organized, but often just add noise. Method chaining lets you write a sequence of transformations as a single expression, which reads naturally and avoids naming objects that do not need unique identifiers.
Instead of this:
```python
df1 = df[df['status'] == 'active']
df2 = df1.dropna(subset=['revenue'])
df3 = df2.assign(revenue_k=df2['revenue'] / 1000)
result = df3.sort_values('revenue_k', ascending=False)
```
You write this:
```python
result = (
    df
    .query("status == 'active'")
    .dropna(subset=['revenue'])
    .assign(revenue_k=lambda x: x['revenue'] / 1000)
    .sort_values('revenue_k', ascending=False)
)
```
The lambda in assign() is important here.
When chaining, the current state of the DataFrame cannot be accessed by name; you have to use a lambda to refer to it. The most frequent cause of chains breaking is forgetting this, which typically results in a NameError or a stale reference to a variable that was defined earlier in the script.
One other mistake worth knowing is the use of inplace=True inside a chain. Methods with inplace=True return None, which breaks the chain immediately. In-place operations should be avoided when writing chained code, as they offer no memory advantage and make the code harder to follow.
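A minimal sketch on toy data (the column names are illustrative) shows both points: an in-place call returns None, while the chain-safe version returns a new DataFrame at every step.

```python
import pandas as pd

df = pd.DataFrame({"status": ["active", "inactive"], "revenue": [300.0, 1200.0]})

# In-place methods return None, so nothing can be chained onto them:
returned = df.copy().sort_values("revenue", inplace=True)
# returned is None -- calling .head() on it would raise AttributeError.

# Chain-safe version: every method returns a new DataFrame.
result = (
    df
    .sort_values("revenue", ascending=False)
    .reset_index(drop=True)
)
```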
# The Pipe() Pattern
When one of your transformations is sufficiently complex to deserve its own separate function, using pipe() allows you to maintain it inside the chain.
pipe() passes the DataFrame as the first argument to any callable:
```python
def normalize_columns(df, cols):
    df = df.copy()  # avoid mutating the caller's DataFrame
    df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()
    return df
```
```python
result = (
    df
    .query("status == 'active'")
    .pipe(normalize_columns, cols=['revenue', 'sessions'])
    .sort_values('revenue', ascending=False)
)
```
This keeps complex transformation logic within a named, testable function while preserving the chain. Each piped function can be individually tested, which is something that becomes challenging when the logic is hidden inline within an extensive chain.
The practical value of pipe() extends beyond appearance. Dividing a processing pipeline into labeled functions and linking them with pipe() allows the code to self-document. Anyone reading the sequence can understand each step from the function name without needing to parse the implementation.
It also makes it easy to swap out or skip steps during debugging: if you comment out one pipe() call, the rest of the chain will still run smoothly.
# Efficient Joins And Merges
One of the most commonly misused functions in pandas is merge(). The two mistakes we see most often are many-to-many joins and silent row inflation.
If both DataFrames contain duplicate values in the join key, merge() produces the Cartesian product of the matching rows. A 500-row "users" table with repeated user_id values joined to an "events" table can balloon into millions of rows. This raises no error; it just produces a DataFrame that looks plausible but is larger than expected, until you check its shape.
The fix is the validate parameter:
```python
df.merge(other, on='user_id', validate='many_to_one')
```
This raises a MergeError immediately if the many-to-one assumption is violated. Use "one_to_one", "one_to_many", or "many_to_one" depending on what you expect from the join.
The indicator=True parameter is equally useful for debugging:
```python
result = df.merge(other, on='user_id', how='left', indicator=True)
result['_merge'].value_counts()
```
This parameter adds a _merge column showing whether each row came from "left_only", "right_only", or "both". It is the fastest way to catch rows that failed to join when you expected them to match.
In cases where both dataframes share an index, join() is quicker than merge() since it works directly on the index instead of searching through a specified column.
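A minimal sketch, assuming two toy frames that already share a user_id index:

```python
import pandas as pd

users = pd.DataFrame({"name": ["Ann", "Bo"]},
                     index=pd.Index([1, 2], name="user_id"))
plans = pd.DataFrame({"plan": ["pro", "free"]},
                     index=pd.Index([1, 2], name="user_id"))

# join() aligns directly on the shared index -- no key column lookup.
combined = users.join(plans)
```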
# Groupby Optimizations
When using a GroupBy, one underused method is transform(). The difference between agg() and transform() comes down to what shape you want back.
The agg() method returns one row per group. On the other hand, transform() returns the same shape as the original DataFrame, with each row filled with its group’s aggregated value. This makes it ideal for adding group-level statistics as new columns without requiring a subsequent merge. It is also faster than the manual aggregate and merge approach because pandas does not need to align two dataframes after the fact:
```python
df['avg_revenue_by_segment'] = df.groupby('segment')['revenue'].transform('mean')
```
This directly adds the average revenue for each segment to each row. The same result with agg() would require computing the mean and then merging back on the segment key, using two steps instead of one.
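To see the equivalence on toy data, here is the one-step transform() next to the two-step aggregate-and-merge version it replaces:

```python
import pandas as pd

df = pd.DataFrame({"segment": ["a", "a", "b"], "revenue": [100.0, 300.0, 50.0]})

# One step: transform() keeps the original row alignment.
df["avg_transform"] = df.groupby("segment")["revenue"].transform("mean")

# Two steps: aggregate to one row per group, then merge back on the key.
means = (
    df.groupby("segment", as_index=False)["revenue"]
    .mean()
    .rename(columns={"revenue": "avg_merged"})
)
df = df.merge(means, on="segment", how="left")
```

Both columns end up identical; transform() simply skips the alignment step.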
For categorical groupby columns, always use observed=True:
```python
df.groupby('segment', observed=True)['revenue'].sum()
```
Without this argument, pandas computes results for every category defined in the column's dtype, including categories that never appear in the actual data (and, with multiple categorical keys, every combination of them). On large dataframes with many categories, this produces empty groups and unnecessary computation.
# Vectorized Conditional Logic
Using apply() with a lambda function for each row is the least efficient way to calculate conditional values. It avoids the C-level operations that speed up pandas by running a Python function on each row independently.
For binary conditions, NumPy's np.where() is the direct replacement:
```python
df['label'] = np.where(df['revenue'] > 1000, 'high', 'low')
```
For multiple conditions, np.select() handles them cleanly:
```python
conditions = [
    df['revenue'] > 10000,
    df['revenue'] > 1000,
    df['revenue'] > 100,
]
choices = ['enterprise', 'mid-market', 'small']
df['segment'] = np.select(conditions, choices, default='micro')
```
By evaluating conditions in order and assigning the first matching option, np.select() maps directly to an if/elif/else structure at vectorized speed. On a DataFrame with a million rows, it is often 50 to 100 times faster than an equivalent apply().
For numeric binning, pd.cut() (fixed-width or custom bin edges) and pd.qcut() (quantile-based bins) replace conditional assignment entirely, returning a categorical column without any NumPy. Pass the number of bins or the bin edges, and pandas handles the labeling and edge values for you.
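A quick sketch on toy values (the bin edges and labels here are illustrative, not prescriptive):

```python
import pandas as pd

revenue = pd.Series([50, 500, 5000, 50000])

# Explicit edges with labels -- equivalent to a chain of if/elif comparisons.
tiers = pd.cut(
    revenue,
    bins=[0, 100, 1000, 10000, float("inf")],
    labels=["micro", "small", "mid-market", "enterprise"],
)

# Quantile-based: four bins with (roughly) equal row counts.
quartiles = pd.qcut(revenue, q=4, labels=["q1", "q2", "q3", "q4"])
```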
# Performance Pitfalls
Some common patterns slow down pandas code more than anything else.
For example, iterrows() iterates over DataFrame rows as (index, Series) pairs. It is an intuitive but slow approach. For a DataFrame with 100,000 rows, this function call can be 100 times slower than a vectorized equivalent.
The lack of efficiency comes from building a complete Series object for every row and executing Python code on it one at a time. Whenever you find yourself writing for _, row in df.iterrows(), stop and consider whether np.where(), np.select(), or a groupby operation can replace it. Most of the time, one of them can.
Using apply(axis=1) is faster than iterrows() but shares the same problem: executing at the Python level for each row. For every operation that can be represented using NumPy or pandas built-in functions, the built-in method is always faster.
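The contrast is easy to see side by side; this sketch uses a toy column, with the row-by-row version kept only for comparison:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"revenue": [50, 1500, 800]})

# Row-by-row: builds a Series object per row and runs Python code on each.
labels = []
for _, row in df.iterrows():
    labels.append("high" if row["revenue"] > 1000 else "low")

# Vectorized: a single C-level pass over the whole column.
df["label"] = np.where(df["revenue"] > 1000, "high", "low")
```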
Object dtype columns are also an easy-to-miss source of slowness. When pandas stores strings as object dtype, operations on those columns run in Python rather than C. For columns with low cardinality, such as status codes, region names, or categories, converting them to a categorical dtype can meaningfully speed up groupby and value_counts().
```python
df['status'] = df['status'].astype('category')
```
Finally, avoid chained assignment. Writing df[df['revenue'] > 0]['label'] = 'positive' may or may not alter the original DataFrame, depending on whether pandas produced a copy behind the scenes; the behavior is not guaranteed. Use .loc with a boolean mask instead:
```python
df.loc[df['revenue'] > 0, 'label'] = 'positive'
```
This is unambiguous and raises no SettingWithCopyWarning.
# Conclusion
These patterns distinguish code that works from code that works well: efficient enough to run on real data, readable enough to maintain, and structured in a way that makes testing easy.
Method chaining and pipe() address readability, while the join and groupby patterns address correctness and performance. Vectorized logic and the pitfall section address speed.
Most pandas code we review has at least two or three of these issues. They accumulate quietly — a slow loop here, an unvalidated merge there, or an object dtype column nobody noticed. None of them causes obvious failures, which is why they persist. Fixing them one at a time is a reasonable place to start.
Nate Rosidi is a data scientist working in product strategy. He is also an adjunct professor teaching analytics, and the founder of StrataScratch, a platform helping data scientists prepare for their interviews with real interview questions from top companies. Nate writes on the latest trends in the career market, gives interview advice, shares data science projects, and covers everything SQL.

