Summarizer

Performance Optimization Strategies

← Back to Lessons from 14 years at Google

Performance optimization is fundamentally a balancing act between technical efficiency and the nuanced expectations of the human experience. While reducing latency is a primary goal, developers must navigate the "labor illusion," where users may distrust results that appear too quickly, and the physical drawbacks of high-efficiency code, such as disruptive fan noise or overheating hardware. Many contributors advocate for a philosophy of strategic restraint, suggesting that the most impactful optimizations often involve removing unnecessary complexity or prioritizing stability over the constant addition of new features. Ultimately, the consensus highlights that true performance is defined not just by raw speed, but by how well a system integrates into the user's real-world environment and daily habits.

19 comments tagged with this topic

View on HN · Topics
> At scale, even your bugs have users.

The first place I worked right out of college had a big training seminar for new hires. One day we were told the story of how they'd improved load times from around 5 minutes to 30 seconds; this improvement was in the mid-90s. The negative responses from clients were instant: the load-time improvement had destroyed their company culture. Instead of everyone coming into the office, turning on their computers, and spending the next 10 minutes chatting and drinking coffee, the software was ready before they'd even stood up from their desks! The moral of the story, and of the quote, isn't that you shouldn't improve things. It's a reminder that the software you're building doesn't exist in a PRD or a test suite. It's a system that people will interact with out there in the world. Habits will form, workarounds will be developed, bugs will be leaned on for actual use cases. This makes it critically important that you, the software engineer, understand the purpose and real-world usage of your software. Your job isn't to complete tickets that fulfill a list of asks from your product manager. Your job is to build software that solves users' problems.
View on HN · Topics
Worked on public transport ticketing (think rail gates and contactless fare readers) for the last 30 years. When the guys would tell me that the software was "ready", I'd ask:

> Is it "stand next to the gates at Central Station during peak time and everything works" ready?

We were working on the project from a different city/country, but we managed to cycle our developers through the actual deployments so they got to see what they were building; it made a hell of a difference to attitude and "polish". Plus they also got to learn that people travel on public transport to get somewhere, not to interact with the ticketing system. It meant they understood the difference just 200ms can make to the passenger experience, as well as to passenger management in the stations.
View on HN · Topics
I was curious what the commenter's business was, and found this post about HTTP protocol latency: https://jacquesmattheij.com/the-several-million-dollar-bug/
View on HN · Topics
I often chuckle when (our) animations have more complex math, and consume more resources, than the awaited logic/call that they gate.
View on HN · Topics
This is a perfect example of a "bug" actually being a requirement. The travel industry faced a similar paradox known as the Labor Illusion: users didn't trust results that returned too quickly. Companies intentionally faked the "loading" phase because A/B tests showed that artificial latency increased conversion. The "inefficiency" was the only way to convince users the software was working hard. Millions of collective hours were spent staring at placebo progress bars until Google Flights finally leveraged their search-engine trust to shift the industry to instant results.
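The "faked loading phase" described above can be implemented as a simple minimum-latency wrapper. This is a hedged sketch of the general idea, not any particular company's implementation; `with_min_latency` and the threshold value are hypothetical names chosen for illustration.

```python
import time

def with_min_latency(fn, min_seconds=1.5):
    """Run fn, then pad with sleep so the caller never sees a
    'suspiciously fast' result (the labor-illusion trick).
    min_seconds is an arbitrary illustrative threshold."""
    start = time.monotonic()
    result = fn()
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        # Result came back too fast to feel trustworthy; pad it out.
        time.sleep(min_seconds - elapsed)
    return result
```

In practice the threshold would be tuned per product via A/B testing, which is exactly what the comment says the travel industry did.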
View on HN · Topics
So what is the correct solution to that specific problem, then? Adjust the loading time per customer?
View on HN · Topics
Craziest I got was users complaining their laptops were getting too hot / too noisy because I correctly parallelized a task and it became too efficient. They liked the speed but hated the fans going at full speed and the CPU (and hence the whole laptop) getting really warm (talking circa 2010). So I had to artificially slow down processing a bit so as not to make the fans go brrrrr and the CPU get too hot.
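The "artificially slow down processing" fix above amounts to capping the CPU duty cycle. A minimal sketch of one way to do that, sleeping between chunks of work; `process_throttled`, `duty_cycle`, and `chunk` are hypothetical names, and the original comment does not say how the author actually throttled:

```python
import time

def process_throttled(items, work, duty_cycle=0.5, chunk=100):
    """Process items in chunks, sleeping between chunks so the busy
    fraction of wall-clock time stays near duty_cycle (0 < dc <= 1).
    Trades elapsed time for less heat and fan noise."""
    results = []
    for i in range(0, len(items), chunk):
        start = time.monotonic()
        results.extend(work(x) for x in items[i:i + chunk])
        busy = time.monotonic() - start
        # Choose idle so that busy / (busy + idle) == duty_cycle.
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)
    return results
```

With `duty_cycle=1.0` this degenerates to full-speed processing; lowering it spreads the same work over more wall-clock time.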
View on HN · Topics
If the fan was turning on where it wasn't before, it seems like cooling was once happening through natural dissipation, but after your fix it needed fans to cool faster. So the fix saved time but burnt extra electricity (and the peacefulness of a quiet room). This is pretty easy to understand IMO. About 70% of the time I hear a machine's fans speed up, I silently wish the processing would have just been slower. This is especially true for very short bursts of activity.
View on HN · Topics
Obviously the proper solution is to adjust your system's thermal management / power targets, but you can force programs to slow down yourself by changing the scheduling policy: `chrt -i 0 <cmd>`
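The `chrt -i 0` trick puts the command under Linux's SCHED_IDLE policy, so it only runs when nothing else wants the CPU. The same thing can be done programmatically; this is a Linux-only sketch (the `run_as_idle` helper name is mine, not from the thread):

```python
import os

def run_as_idle(cmd):
    """Fork and exec cmd under SCHED_IDLE, roughly equivalent to
    `chrt -i 0 <cmd>` (Linux-only). Returns the command's exit code."""
    pid = os.fork()
    if pid == 0:
        # In the child: drop to the idle scheduling class, then exec.
        os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))
        os.execvp(cmd[0], cmd)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Note that SCHED_IDLE only deprioritizes the work relative to other runnable tasks; on an otherwise idle machine it still runs flat out, so it does not by itself solve the heat/fan problem from the parent comment.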
View on HN · Topics
You probably wanted a low thread priority/QoS setting. The OS knows how to run threads such that they don't heat up the CPU. Well, on modern hardware it does anyway.
View on HN · Topics
I'd expect any OS worth its name to run threads in a way that minimizes total energy, not fan noise.
View on HN · Topics
People with desktop computers don't care about total energy, but they do care about fan noise for overnight maintenance tasks.
View on HN · Topics
You absolutely can remove unnecessary complexity. If your app makes an http request for every result row in a search, you'll simplify by getting them all in one shot. Learn what's happening a level or two lower, look carefully, and you'll find VAST unnecessary complexity in most modern software.
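The per-row-request anti-pattern above is easy to show side by side with the batched fix. A minimal sketch; `fetch_json` and the `/rows` endpoint are hypothetical stand-ins for whatever HTTP client and API the app actually uses:

```python
def get_rows_slow(fetch_json, row_ids):
    # Anti-pattern: one HTTP round trip per row -> latency ~ N * RTT.
    return [fetch_json(f"/rows/{rid}") for rid in row_ids]

def get_rows_fast(fetch_json, row_ids):
    # Batched: all rows for the search in one request -> latency ~ 1 * RTT.
    return fetch_json("/rows?ids=" + ",".join(map(str, row_ids)))
```

Same data either way; the only change is collapsing N network round trips into one, which is exactly the kind of "look a level lower" simplification the comment describes.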
View on HN · Topics
Maybe it's because you are not familiar with Addy Osmani and his work. He has been known for his very high-quality performance optimisation work for the web for almost a decade now. So anything he has read, edited, and put his stamp of authority on is worth reading.
View on HN · Topics
I wish Google would be biased a little more towards quality and performance. Their user-facing products tend to be full of jank, although Gmail is quite good, to be fair. In general I think the "ship fast and break things" mentality assumes a false dilemma, as if the alternative to shipping broken software is to not ship at all. If that's the mentality, no wonder software sucks today. I'd rather teams shipped working, correct, and performant software even if it meant delaying additional features or shipping a constrained version of their vision. The minimalism of the software would probably end up being a net benefit, instead of stuffing it full of half-baked features anyway.
View on HN · Topics
I've used Meet a few times for video calls and I was amazed at how poorly it worked given the amount of resources Google has at their disposal. I've never had a good video call on Meet. I've had a few Meet calls where over time the resolution and bitrate would be reduced to such a low point I couldn't even see the other person at all (just a large blocky mess). Whereas Teams (for all its flaws) normally has no major issues with the video quality. Teams isn't without its flaws and I do occasionally fall back to Zoom for larger group video calls, but at the end of the day Teams video calling sort of just works fine. Not great but not terrible either. YMMV of course.
View on HN · Topics
I've had the complete opposite experience. Meet has been rock solid for me whilst Teams has been an absolute nightmare. The thing is though both Meet and Teams use centralised server architectures (SFUs: Selective Forwarding Units for Google, "Transport Routers" for Teams), so your quality issues likely come down to network routing rather than the platforms themselves. The progressive quality degradation you're describing on Meet sounds like adaptive bitrate doing its job when your connection to Google's servers is struggling. The reason Teams might work better for you is probably just dumb luck with how your ISP routes to Microsoft's network versus Google's. For me in Sweden, it's the opposite ... Teams routes my media through relays in France, which adds enough latency that people constantly interrupt each other accidentally. It's maddening. Meanwhile, Meet's routing has been flawless. But even if Teams works for your particular network setup, let's not pretend it's a good piece of software. Teams is an absolute resource hog that treats my CPU like a space heater and my RAM like an all-you-can-eat buffet. The interface is cluttered rubbish, it takes ages to start up, and the only reason anyone tolerates it is because Microsoft bundled it with Office 365. Your mileage definitely varies... sounds like you've got routing that favours Microsoft's infrastructure. Lucky you, I suppose, but that doesn't make Teams any less dogwater for those of us stuck with their poorly-placed European relays.
View on HN · Topics
As someone who worked on Meet at Google: it could have been networking to the datacenters where the call was routed from, or issues with UDP comms on your network that triggered a bad fallback to WebRTC over TCP. Could also have been issues with the browser version you used. Since Teams is using the very old H264 codec and Meet is using VP8 or VP9 depending on the context, it's possible you also had some other issues with bad decoding (usually done in software, but occasionally by the hardware). Overall, it shouldn't be representative of the experience on Meet that I've seen, even from all the bug reports I've read.
View on HN · Topics
> Before you build, exhaust the question: “What would happen if we just… didn’t?”

Well said! So many times I have seen great products slide downhill. If they just froze the features and UI, and fixed performance, compatibility, and stability issues for years, things would be better (this applies to any company). Many programs I use are years old. They are great programs and don't need constant change! Updates can only make them worse at that point (minus critical security fixes, compatibility, and performance regressions).