In one of Jeff Atwood's blog posts I found an interesting reference to a very juicy interview with Bill Gates. It seems ancient (1986!) & the range of topics discussed is quite wide, but one thing grabbed my attention in particular. Let me quote it to you now:

INTERVIEWER Does accumulating experience through the years necessarily make programming easier?

GATES No. I think after the first three or four years, it’s pretty cast in concrete whether you’re a good programmer or not. After a few years, you may know more about managing large projects and personalities, but after three or four years, it’s clear what you’re going to be.

There’s no one at Microsoft who was just kind of mediocre for a couple of years, and then just out of the blue started optimizing everything in sight.

I can talk to somebody about a program that he’s written and know right away whether he’s really a good programmer. If he’s really good, he’ll have everything at the tip of his tongue.

This really got me thinking ... I asked myself a few questions that you can ask yourself now:

  1. Have you ever met a single programmer who was below average but managed to improve significantly over time? (it's not about pure productivity, but skill level: disregard laziness or motivation)

  2. How much time do you need to evaluate a particular programmer's current skill level?

  3. How much time do you need to evaluate a particular programmer's potential (future achievable) skill level?

Personally, I agree with BG completely. I'd even cut those 3-4 years down to a significantly shorter period: 1-2 years at most. It doesn't mean that people don't improve or learn new tricks: sure they do, but a mediocre programmer will NEVER rise above the average level. As @codinghorror claimed in his post:

You've either got it or you don't.

Period.

BUT it doesn't mean that a good programmer will remain at least good forever. Everything I've written earlier about the necessity of keeping yourself up-to-date & continuous learning by doing still applies & there's no escape from that.

Why does it bother me?

Or rather - why did it strike me when I was reading? Mainly because I know a bit about how career progression is organized in various enterprises that do software development, & the vast majority of the ones I know don't realize what was written above at all ...

  • a programmer's wage is strictly bound to his length of service (seniority)
  • so is his position
  • there's a common belief that if someone's been doing something (like ASP.NET MVC programming) for 4 years, he has to be an expert, significantly better than that youngster who has been exposed to that tech for only 1 year

Consequently, many companies ...

  1. ... turn away young'uns - young, smart, talented programmers who know their value & ask for adequate compensation (which those companies find unacceptable for "apprentices")

  2. ... hire young, green programmers with low wage expectations (& low self-esteem) - because they expect to foster them on their own, thereby raising their skill level

It's a trap!

Yes, it is & it will cost you a lot. It's 2014 already & whatever you're developing, you don't really need hundreds of programmers to achieve anything: a few smart people working together can create marvels within a few months & there's no fixed ratio that maps a good programmer's value-adding potential to a mediocre programmer's:

  • it's not the mythical 10:1 (that's unrealistic, of course)
  • but it's not 4:1, 3:1, 2:1 or 1.5:1 either

Bad, mediocre, uninspired, not-giving-a-fuck programmers are fine as long as they're doing some mindless, repetitive, boring job in which they're led by the hand - but do you really want to create IT products like that?! IT is everyone's business today:

  1. a way to differentiate yourself & your services
  2. a way to reach your customers with your product
  3. a way to improve organization's agility & efficiency

Work done by a bad vs. a good programmer doesn't just differ in delivery timeframe & bug count; there are far more differences whose consequences should be quite obvious:

  • maintainability
  • extensibility - including the cost of future modifications
  • tech debt control
  • coupling
  • testability
  • etc.

Another classic quote

"It’s too easy, as a team grows, to put up with a few B players, and they then attract a few more B players, and soon you will even have some C players. The Microsoft experience taught me that A players like to work only with other A players, which means you can’t indulge B players."

Steve Jobs

Does it really make sense to surround yourself with a herd of dummies who will never rise above a certain level, or is it better to go for a significantly smaller team of cowboys who want to & can do something sterling?

I'll leave you with this for now.
