02 April 2018
We developers love to write code. We go to conferences to learn about the latest techniques and frameworks, and when we get home we can’t wait to apply what we’ve learned. There is no better feeling in the world than pushing some new (working!) code to production and seeing it run to benefit other people. However, often before we start coding, our bosses - who have just come back from a trade show - are asking us about buying the hot new product that all their friends have been talking about. According to the whitepaper and the online product demo, it does everything you’re looking to build plus more! These days developers have a significant say in these decisions (we just get stickers instead of fancy dinners). To code, or not to code, that is the question. As with most engineering decisions, the answer is "It depends…" But before we evaluate, it’s important to understand the bias at work on both sides: the hidden costs of free software and the hidden costs of paid software.
Let’s face it, Open Source Software (OSS) has won. The world is filled with high-quality open source libraries for developers to use without having to pay license fees. I love open source. I contribute to open source projects at the Apache Software Foundation (ASF), and the way I do business would not be possible without it. One of the first things that drew me to open source was the fact that it was FREE! There was no reason for me to ask my manager for budget; I just had to make sure it had a business-friendly license (like Apache 2.0) and I was off. Free is a rather amazing price point that defies traditional economics. One of the studies in the book "Predictably Irrational" by Dan Ariely shows how dropping a product’s price relative to another’s makes only incremental changes in consumer preference… that is, until one of them becomes free. Moving the price to free can cause a massive shift in the free product’s favor. But this all makes sense, right? Free means I can have as much as I want. And that’s where our free bias starts to cloud our judgment, fellow developers. The problem is that even free has a cost.
The most obvious cost is our time. While we’re building something new, is there something else we could be doing with our time to add value? This is called opportunity cost, and it often goes unnoticed until the end of the year. That’s when we realize we’ve been chasing shiny objects rather than working towards our goals. The cost of a developer’s time is generally the largest expense on a project. Compare the annual cost of a developer to the annual cost of a large server. Bring that up the next time your boss wants to have an hour-long meeting to talk about infrastructure. The meeting probably could have paid for a month of hosting! But we’ll save that conversation for another day.
Deciding where we should be spending our time ends up being really important. To complicate this further, we tend to overvalue the things we’ve spent our own time to build. This has been dubbed the IKEA Effect and is discussed in another book by Dan Ariely, "The Upside of Irrationality". The IKEA Effect can cause us to hold on to our own pet projects when better (and cheaper) options are available. I can’t count the number of home-grown responsive web frameworks and content management systems I’ve seen companies hold on to at the behest of the project’s long-since-promoted original developer. Folks, unless your company has found a way to monetize these systems, your pride is costing your company money. So much goes into creating and maintaining a piece of software. The cost of owning your product’s dependency tree is often underestimated. That pang of fear you experience whenever you change a dependency version or switch to a new runtime comes from realizing that a simple change could be bringing in hundreds of lines of new code. This means taking some time to match up library versions to make sure the entire application is compatible. Platforms like Java EE (now Jakarta EE), Spring Boot and Apache Karaf try to lower some of these costs by providing tested library combinations that just work. The Java ecosystem is famous for its backwards compatibility. But it still may take some time to upgrade these platforms to newer versions.
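As a concrete sketch of what those tested library combinations look like, a Maven project can import Spring Boot’s bill of materials (BOM) so dependency versions are managed as a set the Spring Boot team has verified to work together (the coordinates are Spring Boot’s real ones; the version number here is just illustrative - use whatever release you’re actually on):

```xml
<!-- pom.xml fragment: importing Spring Boot's BOM lets the
     dependencies in your project omit their <version> tags and
     pick up versions that were tested together. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>2.0.0.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the BOM imported, upgrading the platform means bumping one version number instead of reconciling dozens of transitive dependencies by hand - which is exactly the dependency-tree cost these platforms are trying to lower.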
Open source projects also vary in maturity and complexity. New or immature projects may require a little more work to get started. The ASF has a couple of different ways to signal maturity. The first is the Incubator, which doesn’t necessarily indicate that the code is not production-worthy, but does indicate that the project is new to the foundation and its processes/culture, aka The Apache Way. This is important, since time has shown that projects that adopt the Apache Way seem to have more staying power than ones that do not.
But even a project that has graduated can be at a different level of maturity. The Apache Maturity Model might help you frame the conversation around adopting a new piece of OSS for your organization. Some OSS platforms are complex regardless of their maturity. Many of the Apache big data platforms (think Apache Hadoop, Apache Spark) and a number of new incubator projects (Apache OpenWhisk comes to mind) require significant distributed systems experience to scale up and debug properly. So even though these projects offer incredibly cool functionality, most companies don’t have enough maturity in their engineering organizations to host this complexity themselves. In that case it might be better to outsource the hosting to a specialist and just focus on the client code. Part of the power of open source is having the option to bring things in house if/when engineering matures and it becomes viable from a cost standpoint.
Finally, there’s the hidden cost of project abandonment. If the community around a project goes away, you might be left with security holes and old transitive dependencies, making upgrades and maintenance difficult. Mitigating this risk requires spelunking through a project’s mailing list or checking GitHub for activity. Or perhaps even getting your company itself involved in the community! Some foundations, like the Apache Software Foundation, also have formal processes for monitoring project health, so a project retiring to the Attic is never a surprise. So even open source has costs, and it’s important to weigh those costs before deciding to move forward with a project. Even free as in free beer has a cost, something I’m all too familiar with ;).
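One quick, rough way to gauge that commit activity, assuming you have a clone of the project handy, is to ask git directly (shown here against a throwaway repository so the commands are self-contained; in practice you’d run the two `git log`/`git rev-list` lines inside the project’s clone):

```shell
# Create a throwaway repo with one commit so the commands below
# have something to run against; skip this in a real clone.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# Date of the most recent commit - how long has it been quiet?
git log -1 --format=%cd

# Commits in the last year - a rough pulse of the community.
git rev-list --count --since="1 year ago" HEAD
```

A project whose last commit is years old, or whose yearly commit count hovers near zero, deserves a close read of its mailing list before you bet on it.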
Perhaps I’ve given you enough reason to at least hear your manager out on the product from the trade show. You might even like the idea that you can just take the cost of the software off a price sheet. It sure beats trying to estimate hours! But now that you’ve decided to pay for software, be it a cloud subscription or an on-premises license, is the only price the sticker price? Probably not. In the end it’s those additional costs that add zeros to the end of project costs.
One of the most troubling criteria I have seen used to evaluate proprietary software is ease of customization. Oh, you don’t like the way this works? Well, you can go in there and write the code to change it. Developers feel right at home with this. But did we include the hours required to customize in the original build vs buy decision? Did we consider the cost of carrying those customizations as the product changes and evolves over time? If not, we may be grossly underestimating the cost of the purchased software. Compare a $500,000 software license to the ongoing cost of the team of developers customizing it: the customizations will be the expensive part, and we need to price that in. This favors the kind of software that works "Out of the Box".
In addition, the higher the investment cost, the more invested we get in making sure it meets expectations. This is known as the Sunk Cost Fallacy. The more we invest in paid software, the more effort and customization we tend to put into making it work. This can create a vicious, never-ending cycle of paying for software, ultimately resulting in bringing in expensive consultants to make it work. Then, when the expensive consultants screw it up, we bring in more expensive consultants to fix it (I may have played that game before). In the end it’s important to decouple your already-paid investment from the expected benefits at each phase of the project. This is easier said than done, since it requires us to be able to swallow our pride and admit defeat from time to time. But isn’t killing a project after wasting $100k better than delivering a DOA product that wastes $1M? I think so.
Expertise can be hard to find as well, which delays implementations for months while waiting for folks to roll off projects. Once again, opportunity cost! One creative way companies have dealt with this is by open sourcing a community version of the product while keeping the tooling and operational aspects closed. This way you can often produce a working POC before deciding to pay for the software. Then, when things start to scale up, the salesperson gets a call. Scaling, however, can also lead to unpredictable costs depending on how you’re paying for the software. Is it by CPU core, by machine, per request? Hold on while I pull out my crystal ball! These pricing models can also manufacture engineering problems that you might not have with open source. For example, let’s say you only paid for 10 CPU cores for the database, so you spent hundreds of development hours optimizing queries instead of just adding cores. Or paying per request in the cloud made sense, until that DDoS attack didn’t just cost you sales, it increased your bill to Amazon.
Another hidden cost to organizations that rely heavily on purchased software is what I call engineering atrophy. Atrophy is what happens to muscles when you stop using them: they get weak and flabby. The same can happen to a company’s engineering teams if vendors are doing all the heavy lifting. It can get to the point where the engineering teams are just filing support tickets or getting trained on the vendor’s next product. Good engineers want to be solving hard problems. The engineers who stick around to manage vendor relationships are generally ill-equipped to handle the challenge of a migration or of bringing software back in-house. I’m not saying you need to build everything, but if all you’re doing is buying, it will catch up to you. Make sure to set aside challenging projects for your teams that add value to your core business. If you don’t keep working those engineering muscles, you are setting yourself up to be bullied by your vendors.
Lastly, when we buy a product we have to give up some control. You want to keep your indemnification? Better patch on the product’s schedule. Want to be on the latest Java version? You’ll have to wait for it to be certified or added to your serverless cloud offering. Need system-level logs for debugging? Send a ticket… we’ll get to it eventually. When things are running smoothly these risks often go ignored. In fact, when a product or solution fits the problem, "Out of the Box" purchasing can be a great choice, as many of the problems I’m calling out don’t exist. However, without carefully considering the hidden costs, your team’s budget may be allocated for the next 5 years.
I think it’s fair to say that as developers we’re not paid to code. We’re paid to solve problems that add value to the businesses that support us. That’s what keeps the money flowing to our bank accounts! It’s important to consider all the costs going into our build vs buy decisions, whether we’re going with open source or with paid solutions. Whether you’re returning from a conference or chatting with your trade-show-loving boss, remember: there’s no such thing as a free lunch!
14 March 2018
As developers, we pride ourselves on being the Spocks of our companies. We’re supposed to be cool and stoic in the face of even the most difficult problems. When we are faced with slow websites, we measure before we optimize. When we are given impossible dates, we deliver what is essential rather than what is perfect. When Product Managers come to us with a flashy new idea, we ask for the data to justify the cost of building said idea. These are all things to be proud of, but it’s exactly our belief that we are completely rational that blinds us. Even developers can be utterly irrational, with bias, self-control, and economics shaping our decision making. The title of my post is a parody of Dan Ariely’s "Predictably Irrational", which explores the causes of irrational decisions in everyday life. My hope is that you, the reader, will be able to identify these situations and make sounder decisions. I’m hoping to make a bit of a series out of this, so enjoy!
Not Imagined Here
Many of us have heard of "Not Invented Here" syndrome, the practice of rewriting code that already exists in open source or on other intra-company teams. This behavior is driven by the assumption that because "I" wrote it, it is superior to code written by others. Not Imagined Here is roughly the same, but is about our resistance to ideas we didn’t think of ourselves. We are often more critical of ideas that didn’t emerge from our own minds, and we often take pleasure in proving someone else’s idea wrong. Critical discussion of an idea is almost always a good thing. Where it goes bad is when it becomes more about "winning" than actually assessing the strengths and weaknesses of the matter at hand.
Consider a new idea a co-worker is pitching to you; you feel a sudden urge to stop them mid-sentence. In a split second you’ve made a decision about the merit of the idea. But that’s ok: my reasoning is sound, and I can justify every bit of it to you. What just happened here? Do we all possess superhuman developer reasoning skills? Unfortunately, we do not. In the book "Thinking, Fast and Slow", Daniel Kahneman breaks the decision-making process into two systems. System 1 is our instinctive, reflexive system, where the vast majority of our decisions are made. System 1 is tuned for speed and survival (no GC here!). System 2, on the other hand, is home to our reflective and analytical processes. Most of our conscious and rational thought occurs here. Unlike System 1, System 2 requires some effort and the occasional Ctrl-C to prevent mental stack overflow. This makes System 2 impractical for every decision, but System 2 is really good at justifying decisions made by System 1. Now, with that understanding, back to the idea our co-worker is presenting to us. That urge to stop them mid-sentence and tell them it’s a waste of time… System 1. The conclusion drawn by System 1 is then often vigorously defended by System 2, because, well, we can’t appear to reject an idea on a whim. Spock would never do that! So our arguments against the idea are often very reflective and thought out. But here’s the kicker: you’re defending a gut reaction to something you may not fully understand. This frequently leads to sub-optimal decision making within teams, as well as reciprocal resistance to your ideas.
So how do we counteract our own mental processes? I think the first step is to start with the goal of understanding the other person’s position before passing judgment. This can be difficult, because we need to accept, at least for a few minutes, that the other person’s idea may be true or correct. Restating the idea in your own words can often be a great starting point for the conversation. Ask some questions about the parts that are not clear. Next, admit the potential benefits of the other person’s idea, then start identifying the assumptions that are required for those benefits to be realized. This allows you to identify potential challenges to the idea without being critical of the idea itself. It also allows you to realize that the idea may in fact be a good one if those circumstances are probable.
Now what about when it’s your idea and you feel you’re getting unfair resistance? You can play the same game with your own ideas. Identify your assumptions and see if you can find some common ground with the other person. If the assumptions are different and there is no common ground, it may become obvious that the two of you are not addressing the same problem. It’s very hard to accept a conclusion when you don’t agree on the premises. You may also find that you’ve made a bad assumption in your own thought process. If none of this works, you might be getting trolled. Let’s face it, some folks just like to argue and prove people wrong to make themselves feel bigger (I can’t stand these people!). In cases like these there’s little value in continuing the conversation. Your time is best used influencing others in the organization. And if you’re convinced you’re right, just write the code! It never ceases to amaze me how often working code helps move ideas forward.
06 November 2012
I’m a sucker for epic movies. I especially enjoy the epics where regular people are put in extraordinary circumstances. Heroes with no superpowers or unlimited resources are forced to face off against seemingly insurmountable odds. And after a long struggle, the heroes emerge victorious. These stories inspire me and drive me to be more like the heroes. But how can someone who writes code for a living aspire to be a hero, or even resemble the people in these movies? Most people see coding as a docile, individual activity occurring in a tranquil setting. There’s nothing epic about typing words on a keyboard… or is there?
Many enterprise software projects run much like the epic war movie We Were Soldiers. A company tasks a few battle-hardened veterans to lead a whole bunch of young, inexperienced, and often delusional privates into a dangerous, high-risk situation. There’s incomplete information, constant churn, and careers hanging in the balance. Enterprise software development is jungle warfare. And jungle warfare requires heroes.
So what does a hero look like on a software development project? The Hollywood depiction of a rock star developer is the guy pulling all-nighters to write thousands of lines of cryptic code that leaves mere mortals awestruck and dumbfounded. Other developers are afraid to even touch the code, assuming they are even smart enough to comprehend it. Sound about right? Well, I’m going to challenge that perspective. Developers like the one described above do exist in our world, but are they heroes? Well… yes, but they’re more like the guy who sacrifices himself to cover a grenade early in the movie. In general you want to avoid creating situations where this type of hero emerges. Gen. George Patton said it best: “The object of war is not to die for your country but to make the other guy die for his.” This type of hero often allows the project to hit the date, but creates an unmaintainable mess for everyone else. The survivors of the grenade attack are happy to be alive, but they’re fighting a man down for the rest of the movie. Shit happens, and if a grenade rolls by it’s important to have people in the group who are willing to jump on it. But I wouldn’t want to build a culture around it, because after a while you’ll run out of people.
So let’s talk about the heroes that I’d like to see more of: the hero that confronts danger head on, inspires greatness all around them, and does what’s right instead of what’s easy. I call them Hands-on heroes.
Most of my favorite epic movie characters can be classified as Hands-on heroes: William Wallace (Braveheart), Maximus (Gladiator), and Hal Moore (We Were Soldiers). None of these men are above the fray. They stand side by side with their men and face to face with whatever adversity is in front of them. To bring this back to the world of software development, these are the men Fred Brooks refers to as “Thinker Doers”, the rarest breed in the software world. These are results-driven leaders who have abandoned the pursuit of the ivory tower. Instead, they embed within the team actually doing the work and contribute directly to the end result. For these folks, failure is not an option.
The thought of putting leaders in the trenches flies in the face of most organizational structures. The best are supposed to rise to the ivory towers and scale their efforts out. They’re too valuable to put in harm’s way. However, the ivory tower removes leaders from the situation on the ground, which distorts their reality. There is a difference between moving figurines around a map and giving orders while bullets are whizzing by your head, just like there is a difference between moving people around on a spreadsheet versus dealing with individuals face to face when allocating work. The argument could be made the other way: that the ivory tower allows people to think more strategically by removing them from the situation. It’s true that it can be harder to make rational decisions on the ground, especially when you’re close enough to the people for them to pull on your emotional strings. But is the purely rational decision always the best one?
Jonathan Haidt makes some very interesting points related to decision making in his book The Righteous Mind. Haidt cites extensive research demonstrating that strategic reasoning follows intuition. So the person in the ivory tower is still going to follow their intuition first, but without a relationship with the people impacted by the decision, little weight is given to the decision’s effect on those people. The decision therefore only appears to be reached more rationally. The problem with this mindset is that failing to weigh the emotional response (even for a purely technical decision) carries consequences. Why can’t we just consider the emotional response as another piece of data? It’s easy to try to distill technical problem solving down to a problem of pure reason. However, as Gerald Weinberg states in The Secrets of Consulting, “it’s always a people problem”. So decision making without proper consideration of the “people problem” is not likely to lead to an optimal decision. These problems are best understood by the Hands-on heroes, since they’re on the ground floor. It’s on the Hands-on hero not to give too much weight to personal reactions; the overall success of the project is still the top factor.
Another staple of epic movies is the hero not only raising their own game, but inspiring the people around them to go above and beyond what they believe possible. Organizations that are under constant pressure to create innovative technical products or services require technical expertise. And unless they’re willing to pay a lot of money for consultants and expert hires, they need to figure out how to grow new technical people internally. Hands-on heroes don’t buy the people around them, they grow them. In Gladiator, Maximus is surrounded by a group of fellow slaves who function as a small army by the conclusion of the movie. Some people are self-driven and will grow without direction. Others need a little push. Both types can benefit from the leadership of Hands-on heroes.
For many new developers, understanding where to start can be challenging. Pair programming is a hands-on technique that allows a team member to absorb knowledge by watching how a more experienced person solves a problem. It also allows the hands-on leader to observe the other person’s work strategies and suggest areas for improvement. On the other side, self-motivated technical people can get stuck in the mentality that they have to learn everything that’s out there. This can lead to the creation of a technophile. Wikipedia defines this type of person slightly differently than I do (http://en.wikipedia.org/wiki/Technophilia). I define a technophile as someone who learns a bunch of technologies for the sake of knowing them, but never really connects the dots to build platforms. A hands-on leader can differentiate a platform built on buzzwords from a platform built on complementary frameworks. Complementary frameworks allow developers to focus on the domain problem instead of technical problems; buzzword platforms allow people’s resumes to grow. Hands-on heroes inspire people to grow skills that complement each other. Skills must be learned to serve a purpose, and those skills must be constantly improved. Teddy Roosevelt was another Hands-on hero; he articulated this idea quite well when he said, “Power undirected by high purpose spells calamity; and high purpose by itself is utterly useless if the power to put it into effect is lacking.”
Finally, Hands-on heroes believe in karma. They don’t do the easy thing; they do the right thing. Doing the right thing is often a learned behavior; many times it’s because they’ve seen where the easy path leads. It would have been easier for William Wallace to admit treason to the English and receive a quick death, but the legacy of his quest for independence would have been tainted. For developers, it’s easy to deliver a project without unit tests if you know you get to walk away at the end. It’s easy to write methods with a cyclomatic complexity over 20. It’s easy to Ctrl-C and Ctrl-V. But if you stick around your community long enough, it’s a matter of when (not if) it comes back to bite you. Code has karma: bad code perpetuates itself, but so does good code. Anyone who’s walked onto a project with 80% code coverage knows the guilt you feel when you add a class without writing tests for it. Hands-on heroes know that the things they produce are meant to live on well past their time on the project. They understand that doing the right thing perpetuates others doing the right thing. They give others something to aspire to and leave a legacy worth remembering.
Delivering great software can be epic. As in all epics, heroes must emerge for the good guys to win. The current trend in business today is to remove the heroes from the fray and put them in ivory towers. The intent is to better scale out the hero’s talents, but often this only causes their efforts to be diluted. Battles can be won or lost in the trenches. It’s important to ensure that the right people are in these positions. Great software requires Hands-on heroes.
31 January 2012
As the Super Bowl nears, you can’t avoid the constant bombardment of advertising, predictions, and analysis leading up to the game. You can either fight it or embrace it. I personally love the sport, so today I am embracing it in the context of web development.
Football, like web development, is driven by a number of specialists chasing the same goal. Each specialist brings unique talents to the table, and weakness at any position or bad overall chemistry can lead to failure. So let’s take a look at some of these positions and see how far we can take this analogy…
First, your engineers are your offensive line. Typical of offensive linemen, they do a lot of the heavy lifting and often don’t receive a lot of attention. And for many, that’s just fine. You might have some rock stars here or there, but success is measured by how well the unit as a whole performs rather than by individual performance. Case in point: a rock star programmer’s elegant but confusing design might weigh the team down instead of lifting it up. Most projects are too large for a single rock star to handle alone, so unless others can work with the solutions they develop, the team is often better off without them. Unfortunately, engineers, like linemen, have difficulty tracing their work directly to outcomes. Often a job well done just enables another person to do their job, upon which the project’s success hinges. On the flip side, a poorly done job can blow the whole project up. Having a solid engineering team is important to enable the rest of the team to do their jobs. Like the best units in the NFL, good engineering teams are experienced, communicate well, and understand what the guy next to them is doing (cross-trained).
Your User Experience and Front End Development teams are your wide receivers. These folks do work that is very visible and takes a great deal of skill, some of which is hard to quantify. Some people seem to have it and others don’t. Given their position, they have the opportunity to make huge plays, and they are given some freedom in the routes they take when the requirements change. But one of the issues of playing exclusively outside the hashes is that they often don’t understand some of the other things going on in the game. There is sometimes conflict when they’re not getting thrown to enough. Sometimes the game plan calls for establishing an architecture that may make some of their designs difficult to implement. However, there are many receivers who do understand these things, are willing blockers, and work to understand all aspects of the game. There are even some UX/FED developers who dabble in engineering (and vice versa). These folks are the rare hybrids I’d label tight ends. Like the best receivers, good UX and FED folks enjoy the visibility of their work, work to align with the strategy being employed in a given game, and almost never drop the ball.
Your Development Leads are your running backs. These are the folks who carry the load and run head first into problems all day long. Some do it with flash and others just grind it out. There is some glory at this position, but it’s also heavily criticized. A receiver might drop a few balls and still have a good game; if a running back fumbles even once every few games, it’s a black mark. This is also a position where success can go to your head. Most of the great running backs understand that they are dependent on the rest of the team. It’s not surprising that the great Walter Payton was known to take teammates (especially his linemen) out to dinner after big games. Like great running backs, the best leads spread the credit around.
Your PM and Delivery Managers are your quarterbacks. These folks are calling the shots on the field and determining who needs to be where and when. They are judged on some statistics, but at the end of the day it’s about how many times they won the big one. Unlike football, in development you often have more than one person in this position on a project. Just like in football, this causes problems, since there’s nothing like a good quarterback controversy to screw up team chemistry. One way to make this work is to separate the roles so that you’ve got quarterbacks by committee. But even that’s not a silver bullet. In the end, the most successful approach is to get them to understand that they are all on the same team working towards the same set of goals. Easier said than done. The best quarterbacks are the ones who can spread the ball around and always end up with the win (even if it’s not pretty). Those are the types you want in those Project Manager and Delivery Manager roles.
Quality Analysts and Business Analysts comprise your defense. You might have the best offensive group in the game, but if you’re not delivering what your customers want, you still end up losing. Great defenses are dynamic and can adapt to take away or contain whatever an offense throws at them. On any given project there’s almost no way you can cover all the test cases or capture all the requirements, but if you can determine which are the most important, you’re usually going to be just fine. Good analysts need to be relentless, have great instincts, and sometimes even be a little unorthodox. In my experience, a lot of places have good, serviceable analysts, but very few have truly great ones.
Finally, Operations is your special teams unit. Unfortunately, these are the guys everyone forgets about because it seems like their job is automatic… until something goes wrong. Operations teams need to be extremely disciplined and composed of unique individuals. You don’t find a lot of people willing to run full speed with the ball while 11 other guys run full speed at them, just like you won’t find many people able to fix production defects at 3 AM while the rest of the development team is asleep. Ops teams often have very specialized skills that are not always found elsewhere on development teams. Good operations folks are selfless and a little crazy, but when the time comes, they deliver (almost automatically).
I hope you’ve enjoyed my ode to the Super Bowl. Unfortunately I will once again be watching a Bear-less Super Bowl. But I can always root against the Patriots. Go Giants!
01 December 2011
A shocking thing occurred to me the other day while I was reading Catching Fire by Suzanne Collins: I started thinking about ethics in an unexpected way. The book and its predecessor The Hunger Games have themes that touch on a number of troubling moral issues around killing and exploitation. As my mind drifted past those questions I asked myself: could this really happen sometime in the future? I felt a gripping terror when I rationalized it as entirely possible. The most frightening aspect for me is the power of Big Brother watching your every action and the control it could have over everyday life. Much like in Nineteen Eighty-Four by George Orwell, people with unprecedented access to your daily interactions have the ability to manipulate you and your peers. This type of control was never possible in the past. Kings could rule with an iron fist, and people could be watched and controlled, but only to a certain degree. Thanks to technology, mainly software, the rulers in the above stories are able to monitor your every move. And right now, in the real world, this type of technology is being developed and could soon become a reality. Its potential impact is frightening.
As someone who writes software for a living I think it’s clear that we need to step back and think about these things. Gerald Weinberg writes about software ethics in the last chapter of his landmark book The Psychology of Computer Programming, noting that software can be used for good and for evil. He mentions an example of how the computer has enabled great advancements in how we store, query, and represent information. But for all the good that has come from computers, he points out, just think of how much more efficient the Nazis would have been if they had been able to use them to track people. Add a database with a couple of table columns and an agile methodology, and they could have tracked their velocity at wiping entire peoples off the map. Chilling…
Today we have drone aircraft flying around all corners of the globe, facial recognition software, video cameras mounted in numerous places to do things from tracking motorist behavior to tracking terrorists. As free people we have to ask ourselves what controls are in place to protect us against these things if they fall into the wrong hands. What actions do we have to protect ourselves from misuse? What is the point where protection intrudes on our basic rights as people? Is my current project making things worse?
So what’s the point of all this? The point is that as a technologist you have a great responsibility to think of these things and ask yourself what your work is really amounting to. You have the power to change the world and become famous for it. But before you change the world have you asked yourself if the world you’re creating is one you’d like to live in?
29 June 2011
A software developer and a weightlifter are probably among the last things a person would associate with each other (there are exceptions). However, there are techniques in weightlifting that I believe can be applied to a software developer’s regimen to enhance performance. I’m not talking about steroids, although caffeine is probably the closest thing to steroids for a developer. I’m talking about periodization.
Periodization is a weightlifting system whose purpose is to prevent a lifter’s strength gains from leveling off, or plateauing. Plateauing often happens when a lifter uses the same training routine for an extended period of time (the same exercises at the same targeted number of reps): the lifter’s muscles adapt to the routine, causing strength gains to level off. Therefore strength experts recommend mixing the routine up every month or so to prevent adaptation and achieve peak performance. For example, the book Complete Conditioning for Football by Michael J. Arthur and Bryan L. Bailey recommends alternating between routines focused on increasing muscle size (3 sets of 10 reps with 1-minute rest intervals) and routines focused on strength (3 sets of 5 reps at higher weights), followed by a peak phase of sets of 4, 3, and 2 reps at very high weights just prior to the start of camp. The transitions between periods are often pretty rough, as muscles need to adapt to the additional loads or intensities, so you can count on being pretty sore for the first week or so. Despite the discomfort, this type of program is backed by extensive research showing that it enhances performance by allowing a lifter to experience more continuous improvements in strength. But could these same principles be applied to mental activities?
I believe they can: rotate developers through different technologies and parts of the system the way a lifter rotates routines. So how can you implement such a program? First, it sounds like a hard sell to take someone who’s a rock star at one thing and put them in a foreign part of the system. Who’s going to cover for the SOA rock star while he’s out experimenting in GUI land? Sometimes it’s surprising to see who steps up when the resident rock star is out. In the end the organization benefits from improved performance from its people, as well as some knowledge redundancy that can come in handy if the department experiences some churn. So in those terms it’s really a win-win. What are the appropriate period lengths? As a general rule it takes roughly 3–4 months of working with a technology on a daily basis to get a sense of it. I’m talking in the general sense, not in a mastery sort of way, but clearly beyond the hello-world stage. This is generally the point at which most developers can begin to apply the technology without needing guidance from someone already competent in it. It is also the level of understanding that gets you the most bang for your buck: a general understanding of a technology allows you to be conversational with other developers and to effectively incorporate that knowledge into designs involving a complementary technology.
So be wary when you think you’re at your peak. You could just be standing on one of many plateaus. Change things up or you might find your development muscles have atrophied!
21 June 2011
I happen to be rereading The Black Swan by Nassim Taleb and started thinking about the black swans I run into in the world of software. Software embodies the essence of the black swan, which consists of the following:
The event is a surprise (to the observer). The event has a major impact. After its first recording, the event is rationalized by hindsight, as if it could have been expected (e.g., the relevant data were available but not accounted for).
Bugs are always a surprise to a developer; only an evil developer would purposely introduce bugs into software that was knowingly going to be shipped. Because of this, software must proceed through rigorous testing prior to shipping to attempt to eliminate these surprises. Often before such a cycle a manager asks the developer "Is this code complete?", to which a confident developer eventually replies "Yes". Testing commences, and to the developer’s surprise a list of bugs appears in his or her inbox the next morning, or worse, the testing reveals the application is completely untestable. Often the latter is caused by some minor configuration that was missed when deploying the application to the testing environment. This satisfies the second requirement: the surprise event has a major impact. In software a single missing character can devastate an entire program. Compilers and unit testing frameworks eliminate some of the more obvious bugs, but there are still bugs that only emerge under the harshest conditions, such as heavy load or concurrent access. These are difficult to test for and often hard to fix. Additionally, because software is so scalable (you can keep selling the same piece of code over and over at very little cost), a bug found after shipping may need to be fixed on millions of individual machines. Finally, after a bug is revealed or fixed, how many times has a developer quickly rationalized that it obviously occurred because of someone else’s work or an unexpected use case? If only they had known these things, the software would have been coded correctly. In fact, fixing one bug often introduces another (Fred Brooks estimates this at 50% on the higher end), so these after-the-fact explanations often do not hold weight.
Software development is always striking a delicate balance between enhancing productivity and keeping systems understandable. Aspect-Oriented Programming (AOP) is one example of a technique with significant power: it can append code to various parts of a system based on rules. This can provide productivity gains for an individual coder, but it can also introduce black swans into your software. One of the top uses of AOP is adding logging throughout a system. This works well because logging does not in any way modify the state of the system, so appending code that just records what has happened is relatively harmless. But what if a clever programmer devises AOP rules that do modify the state of the system, for example updating a required field on all objects of a certain type whenever they pass through a method whose name starts with "update"? From one angle the design is elegant and clean, since the code is "doing something for free" for other developers. The problem is that other developers may not understand these rules, or even know that these things are happening. So if a developer updates that field within their own method and finds upon exiting the method that it has not changed, they may not understand the system well enough to debug it. Magic has happened, which is a very uncomfortable feeling in the mind of a developer.
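To make the "magic" concrete, here is a minimal sketch of that kind of state-modifying advice. A full AOP framework like AspectJ needs its own weaver, so this uses a plain JDK dynamic proxy to play the role of the aspect; the Order type and its lastModified field are invented for illustration.

```java
import java.lang.reflect.Proxy;

// Invented types for illustration: an Order, and "advice" that silently
// stamps a field whenever a method whose name starts with "update" runs.
interface Order {
    void updateQuantity(int qty);
    long getLastModified();
}

class OrderImpl implements Order {
    private int qty;
    private long lastModified; // the field the "aspect" secretly owns
    public void updateQuantity(int qty) { this.qty = qty; }
    public long getLastModified() { return lastModified; }
    void setLastModified(long t) { lastModified = t; }
}

public class MagicAdvice {
    static Order weave(OrderImpl target) {
        return (Order) Proxy.newProxyInstance(
            Order.class.getClassLoader(),
            new Class<?>[] { Order.class },
            (proxy, method, args) -> {
                Object result = method.invoke(target, args);
                // The "aspect" rule: mutate state after any update* method.
                if (method.getName().startsWith("update")) {
                    target.setLastModified(System.currentTimeMillis());
                }
                return result;
            });
    }

    public static void main(String[] args) {
        Order order = weave(new OrderImpl());
        order.updateQuantity(3);
        // Reading only OrderImpl, nothing suggests lastModified ever changes.
        System.out.println("lastModified stamped: " + (order.getLastModified() != 0));
    }
}
```

A developer reading only OrderImpl has no idea lastModified is being set behind the scenes, which is exactly the debugging trap described above.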
A second example is Java concurrency. Java’s memory model is designed to optimize the speed at which concurrent events can happen on, say, a website like eBay. The problem is that developers tend to assume concurrency is handled for them, so that when two users bid on the same item at the same time the JVM knows what to do. And it does, but for the sake of speed Java sometimes keeps multiple copies of a value in memory, and unless you tell it to publish a write to all of those copies, as is necessary for concurrent programs to work correctly, "weird things" start to happen. Another black swan.
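As a sketch of this publication problem (the class and field names here are mine), the snippet below shows the tool Java gives you: a volatile write that publishes a plain write made before it. Remove the volatile keyword and the memory model permits the reader thread to spin forever without ever seeing the update, which is the "weird thing" in question.

```java
public class SafePublication {
    // Without 'volatile', the reader may never observe this write: the JIT
    // is free to cache 'ready' in a register, a classic memory-model surprise.
    private static volatile boolean ready = false;
    private static int answer = 0; // published safely by the volatile write below

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the write is published */ }
            System.out.println("answer = " + answer);
        });
        reader.start();
        answer = 42;  // write the data first...
        ready = true; // ...then publish it; volatile orders the two writes
        reader.join();
    }
}
```

With volatile in place this reliably prints answer = 42, because the volatile write to ready establishes a happens-before edge that covers the earlier write to answer.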
Finally there is the case of the JDK upgrade, in which the foundations of the language itself can change. Suddenly performance tricks implemented against the old version no longer work. Memory model changes may expose lurking race conditions that were always there but just never came up. And on the flip side, certain bugs and performance issues may even vanish due to fixes in the underlying core code. All of this likely makes software development equally if not more black-swan prone than even financial markets. Some of these swans are unavoidable, but others can be mitigated. Fred Brooks has so far been proven correct that order-of-magnitude increases in software development productivity are hard to come by, though he is really talking about the process of creating software and not exclusively the tools we use. I think we need to take inventory of the tools we use and ask ourselves the question: "Where am I creating black swans in my code?"
31 March 2011
I hate black boxes. For folks who don’t know what I’m talking about, imagine a gumball machine made entirely of steel (no glass). The first time you put a quarter in, out comes a gumball. Wanting 2 more gumballs you put 2 quarters in and out pops a hairball. Frustrated, the next time you put a dime in, kick the machine for good measure, and out spouts a cheeseball. Black boxes are frustrating and risky in software development for the same reasons. You input a value in order to obtain an expected outcome, and when it doesn’t work, or even worse works sporadically, you’re left scratching your head about what to do next. You read the documentation, but the thing is just not working as you expect it to.
For a business this is a huge risk, since you never really know if you’ve covered all the scenarios. Producing production-grade software is like walking through a minefield, where untested code fragments are the mines. With closed source you really have no choice but to send folks in blindfolded; you have no idea whether you’re going to prod some live rounds in the ground. With open source it is possible to use code coverage tools to measure how much of the code you’ve run through. In that case you’re still in a minefield, but you’ve got a metal detector to guide you along the way. Yet even with open source there are always cases where the environment is interfering, or perhaps something is going on with the hardware (plastic mines!). Even the stuff we think we know is often built on top of black boxes.
I’ve had a couple of extremely frustrating experiences working with black boxes. The first was trying to build a message board with Microsoft FrontPage back in 2000. It had this great interface that allowed you to generate a message board with a wizard and embed it into a page. I added it to my fraternity’s site and quickly decided I wanted to tweak the thing a bit. I spent hours looking through the code and tinkering with config parameters. Finally I discovered that all this "stuff" was going into an executable that completely mystified me. I was stuck. That’s probably what drove me away from Microsoft to Java when it came to web development. When I started out, a lot of libraries were still black boxes to me; I hadn’t learned enough about the language yet. As I became a more seasoned developer I learned that if a couple of quick searches on message boards didn’t solve the problem, the next best thing was to just pull down the source, or even use a decompiler, to see what made the library tick. It’s a huge security blanket dealing with open source software, since you can always pull back the curtain and see what the Wizard (pun intended) is doing. And that kept me pretty comfortable for a while, until I met a new black box … SiteMinder.
SiteMinder and I had our first encounter in 2009, but it wasn’t until I got into consulting last year that I saw how much of a black box the product really is. SiteMinder is a security product that protects websites from unwanted visitors by blocking pages that users should not have access to. The package comes with its own templating file type, known as FCCs, that allows a programmer to create custom pages to collect login credentials and change passwords. It also comes with an SDK that allows you to send commands to it using Java. And though it uses an open source web server (Apache) to render the FCC files and publishes a Javadoc for the SDK, the core program is all proprietary and poorly documented. Black box. The company I was consulting at hired another consultant who specialized in the software. This is a common strategy: if you have a black box, hire someone who’s used it before. For common tasks this worked quite well; things that this consultant had done previously happened very quickly with very few problems. The issues came when we started doing things the consultant had not previously encountered. His first reaction was "Hey, the samples that came with the black box worked, so it must be because you’ve deviated from them" or "Since you’re doing it slightly differently than I’ve done it on previous projects, that must be the issue". When I pressed for an explanation, most of the time I could not get a straight answer. This led to a great deal of frustration and finger pointing. In some cases I found another way, closer to the consultant’s previous experiences, to get things to work. Other times I compiled a number of experiments to defend my team’s code, and a few times he discovered some new secret configuration option that solved the problem instantly.
So what did I learn from all this? The only way to understand a black box is either to find a way to open it or through rigorous experimentation. Decompilers and open source software are great weapons against black boxes. But more than likely, at some point in your career you’ll run into black boxes you can’t open. Frankly, get used to it. The world is just one more black box waiting to be opened.
A couple of parting words of advice dealing with black boxes:
Start with your Intuition
Experience trumps Intuition
Experimentation trumps Experience