22 Comments
Anandaganesh Balakrishnan

Motivating article for both senior data engineers and those aspiring to be one!!

Barnak Banerjee

Loved it.

Juliana Macedo

Amazing text; the more I read, the more I wanted to keep reading. So thanks! It resonated with me and was encouraging.

Binh Nguyen

Amazing post!

Mindah

This is such a masterpiece. It's literally everything we need to hear on this DE journey. Thank you for sharing your knowledge with us!

Claire Scanlon

This was amazing. One of the best reads.

Naina Chaturvedi

++ Good post. Also, start here: 100+ Most Asked ML System Design Case Studies and LLM System Design

https://open.substack.com/pub/naina0405/p/bookmark-most-asked-ml-system-design?r=14q3sp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Davis Pham

I had to subscribe to Substack immediately just to send you my giant gratitude.

I love your Walmart advice; it's really realistic, practical, and suitable for a second-year student.

Ana M.

Love this article; your takeaways are also pertinent to design as a profession. But I do have one question: when looking at how the city (or Walmart) works, how do you architect trade-offs? One person's walk signal is another person's wait time. How is that calculated? Where do you draw the boundaries? These are questions I constantly ask myself when architecting systems. Wondering if you have any general rules you follow?

Ananth Packkildurai

I often think about trade-offs through two lenses. Taking the same traffic signal analogy: one person’s green light is another’s red. The system does not eliminate waiting, but rather makes it predictable. The countdown timer makes the constraint visible and fair. Architecture works the same way; every trade-off should be explained in the system with a visible “timer,” so the system’s priorities are transparent.

The second lens is the predator–prey dynamic in nature. When one side flourishes unchecked, it eventually limits the other, and balance returns through oscillation. The health of the ecosystem depends on that rhythm, not stasis. In systems, every optimization — latency, cost, reliability — creates its counterforce, each constraining and enabling the other. I usually call this "Swinging the Pendulum": we swing the pendulum towards cost optimization until the counterforce slows delivery, then swing back towards delivery, hoping to find the equilibrium.

In practice, that means watching where the system feels strain — queues lengthening, feedback loops flattening — and then adjusting timing or resources until both sides can move again.

So when I design, I try not to freeze equilibrium. I tune the lights and watch the cycles — keeping both rhythm and feedback alive. The goal isn’t perfect balance, but dynamic balance: tension that stays visible, fair, and self-correcting over time.
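
[Editor's note: for readers who want the "swinging the pendulum" loop made concrete, here is a minimal sketch. The strain signals (delivery lead time, monthly cost), the thresholds, and the names read_strain and next_focus are illustrative assumptions, not anything described in the post.]

```python
# Hypothetical sketch of a pendulum-style feedback loop: swing the
# optimization focus whenever the opposing strain signal crosses a threshold.
# Metric names and thresholds are illustrative, not from the post.

def read_strain():
    """Placeholder for real telemetry: queue depth, delivery lead time, spend."""
    return {"queue_depth": 120, "lead_time_days": 9, "monthly_cost": 48_000}

def next_focus(current_focus, strain,
               max_lead_time_days=7, max_monthly_cost=50_000):
    # Optimizing cost for too long slows delivery: swing back to delivery.
    if current_focus == "cost" and strain["lead_time_days"] > max_lead_time_days:
        return "delivery"
    # Optimizing delivery for too long inflates cost: swing back to cost.
    if current_focus == "delivery" and strain["monthly_cost"] > max_monthly_cost:
        return "cost"
    return current_focus  # no threshold crossed, keep the current swing

focus = "cost"
strain = read_strain()
print(f"focus: {focus} -> {next_focus(focus, strain)}  strain={strain}")
```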

Ana M.

This is a GREAT set of answers, Ananth! The support plank that underlies all of them is trust in the system: that the light will change for you or the pendulum will swing back, that the architects have the agency to support that dynamic balance.

Issues arise when events cause that trust to erode, either in the users' perception or in reality.

Ananth Packkildurai

Indeed. You might be interested in the Wicked Problem theory [https://en.wikipedia.org/wiki/Wicked_problem]. It is one of the eye-openers in architectural design.

The first thing to check is whether the system design is an interaction between system-to-system or human-to-system. When humans are part of the loop, the problem space shifts from a complicated to a wicked problem. In a purely system-to-system design, trade-offs can be modeled, optimized, or tuned. But once humans enter the loop — as users, operators, or observers — their perceptions, incentives, and evolving behaviors feed back into the system in unpredictable ways. Each “solution” changes the problem itself.

Wicked problems are inherently hard, but when trust erodes, restoring control is not enough; restoring the system's explainability helps a lot. In a wicked problem, the fix is to reopen the feedback loop, make intent visible again, and let people see the system learning with them.

Ana M.

Yes! Of course, easier said than done. One person's "transparency" can be another person's "security vulnerability," unfortunately. I first learned the term "wicked problems" from Horst Rittel and Melvin Webber's 1973 planning paper, Dilemmas in a General Theory of Planning, as linked in that Wikipedia article, and have constantly referenced it as I lead HCI teams in public healthcare and digital experience. Thanks again for your article and this conversation.

Emmanuel Ekukole

My goodness! The number 4 happens to be associated with order, management, process. Those 4 mentors are invaluable pillars ⚙️🌟🌟🌟

Sridar

Inspiring!

Great read. Thanks for sharing your journey.

Donald Craig Iuppa

Ananth, Excellent article, one of your best! I enjoyed reading it. Cheers, --Don

nikita bogatirov

Wow! Thank you, sir, for this post.

Chloe Bergsma-Safar

Thank you so much for this post, Ananth! As someone who is still fairly early on in her data engineering career, I really appreciated your vulnerability and advice.

Rebwar Bajallan

Amazing post!

Madhan

Wonderful. Gives a lot of confidence and clarity.
