Lessons I often forget
April 9, 2017
There are a lot of best practices that different people follow. No one will ever agree with anyone else 100%, but I try to be as rational as I can about all of them. This post is largely about best practices and principles of thought for software development that I’ve anecdotally found to help. Of course, anecdotal evidence is not good enough to accept any single bit of information, but until we have meaningful ways to measure and collect the data, anecdotal will have to do.
I learned that you should never assume that any input you receive is valid. The moment you receive input is the moment you become susceptible to all kinds of nonsense. In the worst case, you’re dealing with security exploits. More commonly, you get malformed data, parse it the wrong way, and cause an exception that might crash your program. Many people don’t like being verbose in their validity checking code because it makes the code ugly and expands it heavily. Others work in contexts where the system was supposedly designed so that you shouldn’t need to worry about your inputs. Still, defensive programming is really important when building production or client-facing systems. Inputs from your users can come in forms you don’t expect. If you are working with an external API service, the API might change things on you without telling you. Some APIs will politely inform you of deprecations; some will just straight up change things without ever bothering to tell you. Program defensively, and do so with a lot of confidence. Nothing sounds more like laziness than a system going down and the answer to why being, “because I never expected that input”. I know this is easier said than done, but some very basic assumptions go a long way in defensive practice. For example: always check for null; if you’re in a weakly typed language, check the type of the data you get; and if you’re ever going to access or index something concrete, always check for its existence first. Some programming languages make this nicer or easier to do, but you should do it no matter what.
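Here’s a small sketch of what those basic checks can look like in Java. The response map, the "name" field, and the method name are all hypothetical stand-ins for whatever external data you actually receive:

```java
import java.util.Map;
import java.util.Optional;

public class DefensiveParsing {
    // Returns empty instead of throwing when the input is malformed,
    // so the caller has to consciously handle the bad-input case.
    static Optional<String> extractName(Map<String, Object> response) {
        if (response == null) return Optional.empty();          // never trust the caller
        Object name = response.get("name");                     // the key may be absent
        if (!(name instanceof String)) return Optional.empty(); // check the type, too
        String s = (String) name;
        if (s.isEmpty()) return Optional.empty();               // and the contents
        return Optional.of(s);
    }

    public static void main(String[] args) {
        System.out.println(extractName(null));                  // malformed: no map at all
        System.out.println(extractName(Map.of("name", 42)));    // malformed: wrong type
        System.out.println(extractName(Map.of("name", "Ada"))); // well-formed input
    }
}
```

The point is not the `Optional` specifically; it’s that every assumption about the input (non-null, right key, right type, non-empty) is checked before the data is used.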
Along with that first tip, you should have a system set up for receiving crash alerts and/or sending yourself logs. There’s an insane number of services out there that already do this or make it really easy. For example, I’ve used Crashlytics for various Android projects, and I’ve heard of Rollbar and Papertrail from podcasts. Whichever you pick, make it really easy to get yourself information. I’ve listened to several different podcasts that featured the author of The Art of Monitoring, but I have yet to read the book myself. The conversations are always really interesting, though, so I think the book holds really promising tips. It’s definitely on my reading list.
I once went overboard writing something defensively, and it added a lot of extra code that would get abstracted into a method call later anyway. It made no sense for me to explicitly write all the error checking over and over again when I knew I would be gutting it all. When you’re building a new feature, the only metric that matters at first is that it works correctly. Write the working code first. Best practice only matters after it works.
Write your abstractions iteratively
When you refactor, refactor iteratively. The abstractions you write, or the code you pull into new methods, should be done one step at a time. I once was going through code that had a heavy amount of repetition and pulling it out into separate methods. The issue came when one use case was ever so slightly different in the work it had to do. I think a lot of people would have written one method with a parameter that changed that core piece of the logic. What I ended up doing was first writing a method for the first type of logic, and then writing the second version as its own code. I actually kept it this way because the code was easier to understand, and the duplication that remained was fine. First, the duplicated code was minimal. Second, it’s not guaranteed that if I changed one I would need to change the other, which is the big argument for writing the abstraction in the first place. I was reading a Reddit thread entitled “What unpopular opinions do you have about software development?” and I came across the following comment that puts my same feeling very concisely:
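To make the trade-off concrete, here’s a hypothetical reconstruction of the situation (the formatter methods and their names are made up, not my original code). Two separate methods read plainly, while the merged version forks its core logic on a flag:

```java
public class Formatting {
    // Kept separate: each method is trivially readable on its own.
    static String formatSummary(String title, int count) {
        return title + ": " + count + " items";
    }

    static String formatDetailed(String title, int count) {
        return title + ": " + count + " items (detailed view)";
    }

    // The merged alternative: one extra parameter that changes the core
    // logic. Call sites like formatReport("Sales", 3, true) force the
    // reader to go look up what the boolean means.
    static String formatReport(String title, int count, boolean detailed) {
        String base = title + ": " + count + " items";
        return detailed ? base + " (detailed view)" : base;
    }

    public static void main(String[] args) {
        System.out.println(formatSummary("Sales", 3));
        System.out.println(formatDetailed("Sales", 3));
    }
}
```

With duplication this small, keeping two methods costs almost nothing, and the two variants remain free to evolve independently.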
Just because some code gets repeated does not make it an absolute evil. Everything we do as software engineers is about balancing trade-offs.
Clever solutions help make you feel good, but sometimes they’re honestly just not needed. While it isn’t directly related, I recommend you watch “How to build a business without quitting your day job”. In it the presenter, Vincent Woo, talks about building a small business, which he explains is different from a startup. One of his guiding ideas is that what you do should be boring. I believe the same holds for code. We spend the large majority of our time reading code, and boring code, while less exciting, is easier to understand. This does not mean that you shouldn’t use language-specific features or applicable parts of the standard library. It means prefer procedural programming over concurrent programming. It means keep your branches flat and to a minimum. It means use simple data structures and algorithms unless you’ve proven that performance is critical. For example, I had once written a data model/state container where everything was available in constant time because I stored it all in a HashMap. The problem was removing things from the HashMap: if you don’t delete items, you effectively have a memory leak. At the time I was trying to be clever with logic for handling updates and changes and whatnot, but the code was getting messy. What I did instead was just use an ArrayList, and every time there was an update, I’d clear all the elements and reconstruct every single one from scratch. The number of items was absolutely trivial, and guess what? When I measured the difference, there was no meaningful difference in execution performance. The update code just became a for loop, and retrievals were done with a linear search. To the end user there was no difference, and the code became incredibly boring. This ties into the functionality-first principle, and you can hear it expanded on in this talk by Jonathan Blow.
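A minimal sketch of that boring version, in the spirit of the anecdote above (the Item record and its fields are illustrative, not the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class BoringStore {
    record Item(String id, String value) {}

    private final List<Item> items = new ArrayList<>();

    // On every update, throw everything away and rebuild from scratch.
    // No clever incremental bookkeeping, so no stale entries to leak.
    void replaceAll(List<Item> fresh) {
        items.clear();
        items.addAll(fresh);
    }

    // O(n) linear search: trivial for small n, and trivially correct.
    Item find(String id) {
        for (Item item : items) {
            if (item.id().equals(id)) return item;
        }
        return null;
    }

    public static void main(String[] args) {
        BoringStore store = new BoringStore();
        store.replaceAll(List.of(new Item("a", "1"), new Item("b", "2")));
        System.out.println(store.find("b").value());
        store.replaceAll(List.of(new Item("c", "3"))); // old items are simply gone
        System.out.println(store.find("a"));
    }
}
```

Retrieval goes from O(1) to O(n), but for a trivial number of items that difference never shows up in a measurement, and the deletion problem disappears entirely.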
Don’t prematurely optimize, but that doesn’t mean you write bad code
Premature optimization gets brought up so much, and too many people use it as an excuse for writing bad code. The more experience you get as a programmer, the clearer the line of that trade-off becomes. With less experience, you don’t know whether a solution would be considered bad. I remember some code from a peer when I was in school that had 3 nested while loops with flags being set to determine whether you had to enter the next nested loop or needed to kick out of it. That was really bad, but that peer didn’t know any better. There are a lot of rules for writing better code, and much of the time they are subjective or really context-specific. With that being said, the general rules of thumb I follow are to be wary of nested branches or loops, and to watch the number of parameters you put in a function. Those two kind of go hand in hand. I recommend reading Clean Code by Uncle Bob to learn how to write better code, but use it as a reference, not as law. I don’t agree with everything in the book, but it’ll at least help you be aware of other considerations. With all that being said, the point of this lesson is to not blindly use the premature optimization line to justify bad code. The full quote is about trying to write 100% of your code to be 100% as fast as possible, when only 3% of the code is critical to performance. That doesn’t mean the other 97% can be trash.
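To show what I mean about nested loops and flags, here’s a reconstruction (not my peer’s actual code) of the flag-driven pattern next to a flatter version that just returns as soon as it knows the answer:

```java
public class FlatLoops {
    // Flag-driven: the loop conditions are entangled with the flag,
    // so you have to trace every iteration to see when the loops stop.
    static boolean containsFlagStyle(int[][] grid, int target) {
        boolean found = false;
        int i = 0;
        while (!found && i < grid.length) {
            int j = 0;
            while (!found && j < grid[i].length) {
                if (grid[i][j] == target) found = true;
                j++;
            }
            i++;
        }
        return found;
    }

    // Flatter: same behavior, but the early return makes the exit
    // condition obvious and removes the flag entirely.
    static boolean contains(int[][] grid, int target) {
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell == target) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] grid = {{1, 2}, {3, 4}};
        System.out.println(contains(grid, 3));          // found
        System.out.println(containsFlagStyle(grid, 9)); // not found
    }
}
```

Neither version is an optimization of the other; they do the same work. The second is just easier to read, which is exactly the kind of improvement the premature-optimization quote was never meant to argue against.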