Saturday, May 30, 2015

The Technical Bully Interview


Recently, I have been involved in a lot of interviews, both as a hiring manager and while consulting with friends and colleagues who are looking for new opportunities.  I am always amazed at how differently people conduct interviews.  Some are enlightening, others lackadaisical. Most recently, I have witnessed a surge in what I refer to as the "technical bully".  These are individuals who fit a certain stereotype of software developer.  Unfortunately, the stereotype is not the Hollywood version of a lovable nerd with a big heart.  No, these folks often lack the people skills to communicate effectively and are out to prove something, usually that they are superior to the candidate in technical knowledge and experience.  While this must do something to stroke the bully's ego, I find it counterproductive to the goal of hiring quality people.  What is worse, these technical bullies are usually quite talented and experienced, and so are often included in the interview process. 

Here are some common things technical bullies do in interviews:

1.  Act annoyed by having to do the interview.  

You only have a moment to make a first impression, and if you come across as put out and annoyed at having to do the interview, you are already sending the wrong message.  And speaking of being put out: chances are the candidate had to take time off from his current job, get dressed up, drive to the office, arrive early, etc.   Let's be honest, interviews are a pain in the ass for everyone involved.

2.  Continue to ask questions until you either don't know the answer or get it wrong.  

I am not sure if this is trying to measure where your knowledge/experience ends or to gauge how you measure up to them.  Either way, it generally isn't productive after you have shown competency in a subject area.  I have witnessed bullies asking dozens of questions in a specific area that is only tangentially related to the role being applied for.

3.  Continue to ask questions in areas you have said you don't know or don't have experience in.

Much like the second point, it is impossible for anyone to know everything.  So when a person clearly doesn't know a particular topic, continuing to drill them on it is often counterproductive and can cause the candidate to become defensive.

4.  Expect you to read their mind, follow their train of thought, or understand the pain points that need to be dealt with, especially when these are poorly communicated or not communicated at all.
Technical bullies often have their blinders on.  They are concerned with their own problems and don't have the time or desire to put those aside to focus on the role's needs and the bigger picture.  Combine this with an inability to communicate well and you have a recipe for disaster.  I have seen bully interviewers ask questions so vague that the candidates almost didn't know they had been asked a question.  If you can't ask a question and communicate effectively, how are you going to set the employee up for success if they do take the job?

5.  Not smile.  

This might seem obvious, but smiling is the single most important thing you can do.  We spend a lot of our lives at work and most people would prefer to make that time as enjoyable as possible.  This is in direct conflict with the bully interviewer's primary agenda of intimidating the candidate.  

I try to follow a different approach to hiring talent for my teams.  For me, the team is paramount: the team is greater than the sum of its parts.  A high-performing team not only gets the work done in a timely manner, it does so at a cost (both human and capital) that is lower than that of the same number of individuals working alone, and in a predictable, forecastable way.  This does not mean that individuals need not be skilled or experienced, but rather that each individual needs to be able to do the job while working as a member of the greater whole.

By putting a technical bully into the interview process, you risk scaring away the candidates who are fundamental to team building.  Those members who are more than code monkeys, who can communicate effectively, develop software, and work well with others, are the foundation around which a team can be built.  Interviewing is a two-way process.  It is the brief time where you try to determine if the candidate can do the job and will not be toxic to the team.  But it is also the time where the candidate is trying to determine if the company, culture, and team are a place she wants to be.  At face value, it makes sense to put the technical expert into the interview room to determine if the person has the chops to get the job done. However, you may want to take a moment to determine whether that person is a technical bully and whether the reward is worth the risks.


Friday, May 29, 2015

Primary Roles vs Supporting Roles in Software Development Companies


There are two primary roles within any software development organization:  Engineering and Sales.
All other roles are secondary, but support one or both of the primary roles.  Secondary roles include: product management, project management, program management, user experience, quality assurance, marketing, business analysis, technical writing, customer support, etc.

In a way this has a lot of parallels to the concept of Pigs and Chickens in Agile methodologies. Engineering and Sales are the Pigs: they have skin in the game and are responsible for creating the product and getting people to buy it.  The secondary roles, or Chickens, are supporting roles that help make the primary roles successful.

Surprisingly, it is not uncommon for an organization to get this wrong, which can have catastrophic consequences.  From a development point of view, the most common mistake is making engineering a secondary role, most often by having it switch places with Product Management.

Engineering software solutions is half art and half discipline.  Because of this, driving solution design without input from the engineers themselves will almost always lead to a less than ideal technical solution.  Unfortunately, everyone likes to be part of determining the solution.  This is not a problem as long as people understand that there are always trade-offs to be made.  If time to delivery and cost are not an issue (and they always are), then we can all work together to design the perfect solution.  Too often, time and money are spent gold-plating a solution instead of building a product that delivers value to the customer.  This could be as simple as adding features or functionality that are easy to market or demo well but don't really solve a problem.  These features can be expensive to build and maintain and ultimately lead to a poor customer experience because they don't deliver on the marketing or sales promise.


What kind of "shop" are you?

The Sales Shop
Sell, Sell, Sell is the mantra of this type of organization.  The sole focus is on obtaining the customer at any cost and this almost always means feature development.  The Sales Shop is often a necessity for start-ups, but mature companies often get stuck in this mindset.

The Marketing Shop
This type of organization has completely lost sight of reality.  Their entire world is marketing spin and they not only produce it hoping to aid sales, but they also consume it.  The goal is not to produce a quality product, but to look good to the marketing analysts.  Smoke and mirrors are the tools of the trade of this group and a good "rating" is more important than actually satisfying the need or solving an actual problem.

The Product Shop
Product shops are focused on creating the best possible product, regardless of whether there is a market for it.  This type of shop works under the assumption: if we build it, they will come.  It often gets caught in a vicious cycle of work and rework.  Timelines can be drawn out as the team is constantly evaluating whether what they have could be better.

The Development Shop
Development shops like technology for technology's sake.  The focus is on the technology itself, which can often lead to a lack of focus because of the constantly shifting and changing technology landscape.  The development shop often chases change in search of the latest and greatest technology.

Why Java's 'instanceof' and 'Reflection' Have a "Smell"

In my 16 years of writing code, I have been privileged to work under some very talented developers.  These mentors were often some of the harshest critics of the code I was writing, and I feel that they helped sculpt my coding style over the years.  They all had different styles: some took me "under their wings", others took a more direct approach, and still others were more like the drill sergeant portrayed in the movies when youths join the military and are shipped out to boot camp.

One of the staples of the lessons I learned over the years is to recognize what I refer to as "code smells".  Code smells are pieces of code that are either poorly written, don't solve the problem in an ideal manner, or fail on some level.  Usually, the code smell comes from someone missing one of the basics of writing good code.

Most of my experience has been writing code in the Java language.  Java is a powerful object oriented programming language with a very diverse tool-set and ecosystem of third-party libraries.  One thing that Java doesn't do is protect you from yourself.  This means that just because the language supports something, doesn't mean you should be using it.

In my time writing code, there are two mistakes that I have seen over and over again.  These two smells often point to a larger problem within the code base itself.  They are the use of:

instanceof - In Java, the instanceof keyword can be used to test whether an object is of a specified type.  This seems harmless at first glance, but it is a code smell because it points to a bigger problem with the code you are writing.  Use of instanceof points to a failure of proper polymorphism, a pillar of object-oriented programming.  Deficiencies within your object model are something that should be addressed as a top concern.  Ensuring that the logic you write meets the basics will lead to a better solution over the long term.  There is one exception to the rule: the instanceof keyword is fine to use within an "equals" method.
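To make the smell concrete, here is a minimal sketch (the Shape, Circle, and Square names are hypothetical, invented for illustration).  The static helper re-implements dispatch with instanceof; the polymorphic area() method makes that dispatch unnecessary, and new shapes require no caller changes.

```java
// Hypothetical example: type-dispatch via instanceof (the smell)
// versus letting polymorphism do the dispatch.
interface Shape {
    double area(); // each subtype knows how to compute its own area
}

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class InstanceofSmell {
    // The smell: the caller re-implements dispatch the object model already provides,
    // and must be edited every time a new Shape subtype appears.
    static double areaViaInstanceof(Shape s) {
        if (s instanceof Circle) {
            Circle c = (Circle) s;
            return Math.PI * c.radius * c.radius;
        } else if (s instanceof Square) {
            Square sq = (Square) s;
            return sq.side * sq.side;
        }
        throw new IllegalArgumentException("unknown shape");
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            // Polymorphic call: no type checks, no casts.
            System.out.println(s.area());
        }
    }
}
```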


reflection - In Java, the java.lang.reflect package is a series of utility classes that allow introspection on running code.  It is an extremely powerful concept, but it should generally be avoided in production code.  I often see developers using reflection as a Swiss Army knife within the JVM.  Not only is this bad practice, it has a significant code smell.  Reflection breaks several of the pillars of object-oriented programming: it overrides encapsulation, and it can work around both inheritance and polymorphism.
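A minimal sketch of the encapsulation point (the Account class and its field are hypothetical): with two reflection calls, code outside the class can rewrite a private field, bypassing whatever validation the class would normally enforce.

```java
import java.lang.reflect.Field;

// Hypothetical class: the private modifier is supposed to let Account
// guard its own invariants (e.g., no negative balances).
class Account {
    private long balanceCents = 1000;

    long getBalanceCents() { return balanceCents; }
}

public class ReflectionSmell {
    public static void main(String[] args) throws Exception {
        Account account = new Account();

        // Reflection reaches straight past encapsulation.
        Field f = Account.class.getDeclaredField("balanceCents");
        f.setAccessible(true);       // defeats the private modifier
        f.setLong(account, -500);    // bypasses any validation the class would do

        // The object's invariants are now broken with no compile-time error.
        System.out.println(account.getBalanceCents()); // prints -500
    }
}
```

(On recent JVMs, setAccessible can itself fail across module boundaries, which is the platform pushing back against exactly this kind of use.)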

There are many other code smells and perhaps we will be able to touch on some of them in the future.  Today we covered two of the most common coding mistakes I have seen in my career, the use of the instanceof keyword and leveraging the java.lang.reflect package of utilities in production code.

What are you producing? Software Development Artifacts.


There are four main development artifacts that I treat as top-level concerns for my teams.  All four are required in a "definition of done" and involve some level of input from developers.  They are:

Code
This kind of goes without saying, since this is what the developer produces.  But there are several parts to the "code" artifact: the code itself, which should be high in quality (consistent, simple, and applicable); it should be reviewed by peers; and it should meet the requirements of solving the identified problem.

Unit Tests (Mocked vs Functional)
I believe in the functional safety net that testing provides.  Not only does it ensure that the code behaves the way you intended, it ensures that if something changes and negatively affects that functionality, the problem is caught sooner rather than later.  The big problem with testing is that you can get bogged down in the process.  I believe in an 80:20 rule for writing unit tests.  80% of the unit tests should be small in scope, with mocked dependencies; the key is to make mocked unit tests as easy to write as possible.  20% of the tests should be functional.  Functional tests often cross classes and more often than not include real dependencies.  These are much more expensive to write because of the inter-dependencies that need to be available.
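The mocked side of the split can be sketched with plain asserts and a hand-rolled stub, no test framework required (the PriceService and RateSource names are hypothetical).  The point is how cheap the mocked test is once the dependency is behind an interface.

```java
// Hypothetical service with one external dependency behind an interface.
interface RateSource {
    double taxRate(String region); // a real implementation might hit a DB or HTTP API
}

class PriceService {
    private final RateSource rates;
    PriceService(RateSource rates) { this.rates = rates; }

    double totalPrice(double net, String region) {
        return net * (1.0 + rates.taxRate(region));
    }
}

public class PriceServiceTests {
    public static void main(String[] args) {
        // Mocked unit test (the 80%): a one-line stub stands in for the
        // dependency, so the test is tiny, fast, and needs no environment.
        RateSource stub = region -> 0.25;
        PriceService service = new PriceService(stub);
        assert service.totalPrice(100.0, "anywhere") == 125.0;

        // A functional test (the 20%) would wire in the real RateSource and
        // exercise the same path against actual dependencies; it is the
        // expensive kind, reserved for the critical flows.
        System.out.println("mocked unit test passed");
    }
}
```

Run with assertions enabled (java -ea PriceServiceTests).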

Documentation
When I say documentation, I mean internal technical documentation (as opposed to end-user documentation).  Internal technical documentation describes the object hierarchy, interaction points, data model, and thought process used when designing and coding the solution.  This is incredibly important as it forces the developer to explain how things work and, in a way, validate the design.  In addition, it is incredibly helpful for the people who follow behind the initial development effort and have to support, maintain, or extend its functionality.  With comprehensive documentation, onboarding is greatly improved, and people unfamiliar with the solution get a starting place outside of the code itself.

Status
The last artifact is a status.  Communication to the organization is incredibly important; it is amazing how a person's perception is their reality.  In order to ensure that the individual, team, and organization are all on the same page, a status artifact is needed.  This status communicates where we are in the project or effort, what problems we are running into, etc.  Without a consistently communicated status, the outside world (anyone outside the developers themselves) will not have a view into the progress being made.

These four artifacts are incredibly important to successful software development, both in the short and long term.  I try to drive this home by making all four part of my teams' definition of done.

What is Quality?

So, what is quality?

Most companies have a group with the title "Quality Assurance", but is their role really to assure that the product has quality? I feel that their true role is more "functional insurance", as the work they do serves as a functional safety net for the products and the organization rather than ensuring that the product has quality.  Make no mistake, this is extremely valuable, but it doesn't ensure that we have built quality.  Quality, on the other hand, must be built into the product itself.  Unfortunately, quality is difficult to quantify, and doubly so in the software industry. Software is often seen as part art form, and as such quality can seem a subjective topic. But quality is identifiable, and it doesn't come from testing (not unit, not functional, not end-to-end, not automated, not regression). Quality is built into the code itself, and I believe there are three main technical concepts that make up quality software: consistency, simplicity, and applicability. These three concepts lead to achieving the business needs that software be supportable, maintainable, and extensible.

Consistency:
Consistency is arguably the most important concept in building quality software. It is important because it allows familiarity and patterns to form. You can have complex systems, but if you have consistency in how those systems are written and applied, you gain supportability and maintainability within your product, and you can usually meet some level of your business needs with a consistent code base alone.  Consistency forms patterns, and patterns are a developer's best friend.  Patterns allow for a deeper understanding of the code, and that understanding can be achieved more quickly.  Often a developer coming onto a project is looking for the patterns in order to understand how the software works.  Patterns and consistency are so important to software design that numerous books have been written on well-defined patterns (design patterns) that can be used to solve specific problems commonly encountered in software.
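As a small illustration of the point about patterns, here is a sketch of the Strategy pattern (the names here are hypothetical): once a team consistently puts interchangeable behavior behind a strategy, a new developer who recognizes the pattern immediately knows where the variation lives.

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class StrategyExample {
    // The strategies: each normalizer is one interchangeable piece of behavior.
    static final UnaryOperator<String> TRIM = String::trim;
    static final UnaryOperator<String> LOWER = s -> s.toLowerCase();

    // The context applies whichever strategy it is given, so this call
    // site reads the same way everywhere the pattern is used.
    static List<String> normalize(List<String> input, UnaryOperator<String> strategy) {
        return input.stream().map(strategy).toList();
    }

    public static void main(String[] args) {
        System.out.println(normalize(List.of("  Foo ", " BAR"), TRIM));
        System.out.println(normalize(List.of("Foo", "BAR"), LOWER));
    }
}
```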

Simplicity:
Simplicity is the second most important aspect of building quality software. Simple code doesn't mean that you can't solve complex problems, but rather that you take a pragmatic approach to solving them with software. It is not uncommon to see engineers overcomplicate a problem and attempt to solve it in a way that ultimately makes things worse (or at least less supportable, maintainable, and extensible). There are a lot of names for keeping code simple:  the KISS principle (Keep It Simple, Stupid), avoiding over-engineering, MVP (Minimum Viable Product), YAGNI (You Aren't Gonna Need It), etc.  I am a proponent of breaking problems down, solving them with simple building blocks, and leveraging those building blocks in a consistent manner.  When we do this, we gain ground on all three business needs.

Applicability:
The last area of writing quality software is applicability.  Writing software is a process completed through the application of a series of tools and methodologies. Using these tools, from language constructs to third-party libraries, correctly and efficiently is "applicability".  Unfortunately, the depth and breadth of tools and methodologies can be extremely daunting, and the knowledge and experience required to know when and how to use any of the tools available is significant.  This is one of the reasons there is a big difference between the best engineers and mediocre ones.  Applicability is an important aspect of quality software because using the right tool for the job, and using it correctly, greatly reduces complexity (increasing simplicity) and helps enforce consistency.  One interesting aspect of applicability is that it is not static; it can change over time.  The software development environment changes with a frequency that can be difficult to keep up with.  New tools, languages, frameworks, and methodologies are introduced to solve new and existing problems better every day.  This means that over time, the code you wrote and the tools you used may no longer be the best way of solving the issue, and it may make sense to leverage these new tools and techniques.

What happens when we have a quality deficiency?
Being deficient in one or more of these areas can lead to an inability to execute.  What is worse, we often think that to increase quality we need to test, and test, and test some more.  But what we are really doing is trying to protect ourselves with functional insurance because of the lack of quality in our products.  When products suffer from a lack of consistency and simplicity, all aspects of writing code and tests become more difficult, which further degrades our productivity.  This often makes keeping the codebase current difficult, which leads to applicability issues and further decreases quality.  This downward spiral becomes a self-fulfilling prophecy.

So how do we move forward?


Well, the first step is understanding that we must address our deficiencies.  In order to do this, we need to identify "WHAT" problems we want our applications to solve.  Once this is identified, we need to identify "HOW" we are going to solve them at the macro level.  We must understand that how we solve the problems can (and in many places should) be different from the solutions we have today. Once we have the macro picture of how we are going to solve the problems, we need to focus on the micro picture and execute on the plan.  To do this, we need the time and resources to build quality back into our software.  Migrating products, functionality, and software is not an easy endeavor, but it is not an impossible one.  As we move toward the future, our products need to evolve to meet new challenges.  Lastly, we need technical oversight at both the macro and micro levels.  Oversight ensures that we continually build quality into our products.  Without oversight, it is easy for a single developer or even a team of developers to lose sight of the bigger picture.