About Projects Driven By Users

I do believe that users are the most important part of a project. Really. They can provide useful feedback, great ideas, and suggestions on what could make your product better. Unfortunately, the previous sentence only applies to a few users.

In my experience, users are only interested in their own small needs. They don't care about your product; they just want to solve the problem they have today. More than once, a customer has asked me to change a piece of software to solve an urgent issue, only to ask me to roll back the modification a few days later.

Having the development of your product driven only by users' requests is the fastest way to go crazy.

Feedback and hints are important but must be taken with a grain of salt. Don't abdicate the development of your product to users and customers.

A similar situation applies to tests. I've sometimes heard, "The customer will do the tests." That may be OK if we are talking about a small customization, but in all other cases the result will be terrible. A user will test only the three or four functions they use the most and, if something is not working properly, they will probably find a quick workaround instead of spending half an hour on the phone with your customer service. And even if they do spend some time reporting issues to you, it's very likely that most of the messages will look like those reported on this page.

Again, users are not interested in your product. They only want a tool to solve their own problems, and you should just thank them for choosing your software. But don't ask them to do your job.

Code Review

Over the past few weeks, I've been reviewing an old codebase. Some functions have been in place since 2008. You may think those functions are bug-free: after seven years of usage, every issue should have emerged.

But the story is different. The number of errors I've found is impressive: memory leaks, files left open, checks on conditions that can never be true (see also the last Horror Code) and, worst of all, logical errors. What do I mean by logical errors? Let me give you an example.

A Logical Error

There is a file descriptor declared as a global variable (OK, this could be considered a logical error too, but please keep reading) and a function that does some processing and then writes the result to a file descriptor. Among the parameters of this function there is a file descriptor... unfortunately, it is never used. The write is done on the global variable.

Everything works fine only because the global fd is the only one used in that program. What if, in the future, someone called that function passing a different fd? How long would it take to find the bug?

By the way, the compiler signaled with a warning that a parameter of the function was unused, but nobody cared. Always pay attention to warnings!
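A minimal sketch of the pattern, with invented names (the original used raw file descriptors; a FILE* keeps the example short):

```c
#include <stdio.h>

/* The global stream: the only one the program ever uses in practice. */
FILE *global_out;

/* 'out' is accepted but never used, which is exactly what the compiler
 * warns about: every write silently goes to the global instead. */
void write_result(FILE *out, int result)
{
    fprintf(global_out, "result: %d\n", result); /* BUG: should be 'out' */
}
```

As long as every caller happens to pass global_out, nothing visible goes wrong; the first caller passing a different stream gets an empty file and a long debugging session.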

Conclusions

A code review is always a good thing to do. You probably won't find big bugs, but the stability and quality of your software will surely improve. And maybe that strange situation, so difficult to reproduce, will never be reported again.

Image by Randall Munroe licensed under a Creative Commons Attribution-NonCommercial 2.5 License.

The Day That Never Comes

The deadline is close. The customer is waiting for your fix. Your teammate needs your patch before going home. No matter which of these situations applies, the only way to get the job done is to take shortcuts and cut corners.

You don't check some error conditions, use a fixed string instead of a localized one, don't properly free all allocated memory, and so on. Your code compiles and seems to work fine, but you know it must be improved as soon as possible. So you tell your boss and/or the product manager. The response is usually: "As soon as there is some time, we'll fix it."

Guess what? That time never comes. There is always something more important or urgent to do, until a customer (usually an important one) reports an issue with one of the corners you cut. Now the priority is to fix the problem as soon as possible, not to review the code to make sure it cannot happen again.

There is a logic to this: the customer doesn't care about code quality (even if he should). He just wants his software to work without errors. But for your company, it must be different. Why isn't it so?

Well, the answer I found is that for a customer, a quick solution is more important than bug-free software. It may seem pretty odd, but just think about yourself. You buy a new smartphone and it works as expected: you probably don't spam every social network to tell the world that your new iSomething is OK.

But I bet that if you find an issue, the customer service is really kind to you, and the problem is solved in a couple of days, you'll tell everyone about your experience and recommend that brand to your friends.

This is called marketing and, in the past, there was a PC manufacturer that used to take advantage of this mechanism. But that's another story. For now, the only thing I can suggest is to avoid shortcuts. At least unless you are in the marketing department.

Image by Nic McPhee licensed under Creative Commons Attribution-ShareAlike 2.0 Generic.

Code Will Tear Us Apart

There's nothing worse than reading the code of someone you consider a good programmer and finding tons of anti-patterns. Of course, there are often good reasons behind some choices. Such as deadlines.

Jokes aside, I am conscious that my old code sucks too. This is because I continuously try to improve my knowledge and learn from my colleagues and from my mistakes. And also from my colleagues' mistakes.

Are We Writers?

Sometimes I read about parallels between novelists and programmers (yes, I'm guilty too). It may work, but only on a superficial level, because the code we write is not judged on its style. This is also why there are more "good" programmers than good writers.

From the customer's point of view, the only thing that matters is that the software does what he wants. But we know that this is nearly impossible: bugs happen. In addition, new features are required.

From a developer's point of view, it's important that the code is understandable and easily extensible. The problem is that sometimes it's more convenient to rewrite a part of code instead of understanding and fixing it.

Just as an example, I once found a function called manage_parameters() in a file. It was quite long, so I didn't analyze its code in depth, but it seemed correct. The next function in the file was manage_parameters_without_making_mess(). The developer who wrote the latter told me he didn't have time to understand why the first function sometimes failed.

The Truth (?)

The truth is that we forgive (and often forget) our own mistakes, hiding behind poor excuses. But, at the same time, we are ready to point our fingers at other developers, especially those considered good programmers.

Bottom Line

If you think I've read your code and this post is about you, maybe you should spend some time reviewing what you have developed in the past.

Image by Miguel Angel licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic License.

Narcissus Vs Getting The Things Done

"Narcissus" by Caravaggio (1594-96)
Do you know who Narcissus is? He is a character from ancient Greek mythology, so attracted by his own beauty that he forgot to eat, spending his days gazing at his reflection in a river. And what about you, my dear developer? Are you like Narcissus? Do you spend hours creating wonderful structures and classes, fancy functions and stunning algorithms? Don't you feel just like Narcissus?

Assuming you are a developer, for what purpose do you write code?

If you answered anything other than "solving problems", I'm afraid you are similar to Narcissus. This is what we are paid for: to solve someone else's problems, possibly in the best way, but without an excessive amount of work.

I've been there, I know what I'm talking about. Once, when I was a (bad) C++ programmer, I designed a wonderful class hierarchy to solve a trivial problem. The worst thing is that the feature that uses those classes is probably used by 1% of the customers.

The general rule is: the effort must be proportional to the importance.

The most important thing is to complete the project. Right after that comes code readability. Beauty is at the bottom. Not because beauty is not important, but because it is not the purpose of your job. And you must be willing to dirty your code when needed. But this is material for another post.

Please Optimize

Every now and then, I find quotes against optimization, just like this:
This is quite surprising, since in many cases the speed of a program is fundamental to its success. A UI (or a website) cannot be slow or become unresponsive in some situations. Managing a huge amount of data in a few seconds instead of minutes can make the difference between a top seller and an unwanted app.

[A similar reasoning may apply to RAM or disk space too, but in this post I'll focus on execution time.]

The only quote I totally agree with is
premature optimization is the root of all evil.
- Donald Knuth
The explanation is just a few lines below.

(At this link you can download the paper)

[A good programmer] will be wise to look carefully at the critical code; but only after that code has been identified.

Identify The Critical Code

It's not always easy to understand where the bottlenecks are. A developer with enough experience may guess which parts of the code need to be optimized, but:
  • he cannot be sure (scientifically speaking), and
  • he needs a measure of the improvement.
For this reason, you need to measure the duration of (almost) every operation, taking care to feed the application with a realistic set of data. Another good practice is to collect many samples for every dataset and calculate the average, in order to remove the noise produced by other processes running in the same environment.

After analyzing the results, you can start to make changes to the parts that take the longest, possibly one at a time. And then, measure again. Was your modification faster? Good job! Go on to another part. Was it slower? Try another solution.

Now you may wonder which tool to use for the measurements. There are many performance analyzers out there, but I prefer to collect timestamps at the right places.

There are three reasons behind this choice:
  1. I have to review the code, and this is important because I'll have its structure in mind when I start making changes;

  2. some profilers are not very accurate (for example, they return an estimation about which functions take the most execution time, but cannot tell you if this is because they have been called a million times);

  3. I have great control over the measured code, so, once I've identified a slow function, I can set more timestamps.
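A minimal sketch of the timestamp approach, assuming a POSIX system (the helper name is mine; the monotonic clock is used so the measurement is immune to wall-clock adjustments):

```c
#include <stdio.h>
#include <time.h>

/* Difference between two monotonic-clock readings, in milliseconds. */
double elapsed_ms(struct timespec start, struct timespec end)
{
    return (end.tv_sec - start.tv_sec) * 1000.0
         + (end.tv_nsec - start.tv_nsec) / 1e6;
}

/* Typical use: surround the suspect block with two readings.
 *
 *   struct timespec t0, t1;
 *   clock_gettime(CLOCK_MONOTONIC, &t0);
 *   suspect_function();
 *   clock_gettime(CLOCK_MONOTONIC, &t1);
 *   printf("suspect_function: %.3f ms\n", elapsed_ms(t0, t1));
 */
```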

How Much Should I Optimize?

Even if it seems a silly question, there are many different levels of optimization. The most important thing to consider is that the compiler usually has its own strategies to compile and optimize our code. For this reason, what seems like a great improvement may, once compiled, make no difference at all. This is why it's a good idea to compile with optimizations turned on before measuring.

In addition, please consider code readability, otherwise there is the risk that in the future another developer will get rid of your efforts just because they're too hard to understand. If this kind of optimization is really needed, use comments to explain why you wrote such obscure code.

Believe it or not, this once happened to me too: there was a really complicated block of code (with no comments explaining it) that I replaced with a few lines of code, only to roll back the change once I saw the execution time.

Horror Code - Loop And Re-loop

Some time ago, a colleague of mine told me to look at a function. It was something similar to this:
void foo(struct bar array[], unsigned int count)
{
        /* some initialization code */

        for (unsigned int i = 0; i < count; i++) {
                /* 30 rows of code
                   doing something with array[i]*/
        }

        for (unsigned int i = 0; i < count; i++) {
                /* other 20 rows of code
                   doing something with array[i]*/
        }

        /* some cleanup code */
}
At first, I thought the first loop computed some data needed by the second. But after a closer look, I found that this was not the case. Furthermore, I saw that the first five or six rows of both loops were identical.

The explanation is that the second loop was added years after the first, by a different developer who didn't want to waste time understanding what the first loop did. You may not like it, but it works, unless you have performance issues. Personally, I think there are funnier ways to make two loops in a row.

Corkscrew (Cedar Point) 01
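For the record, when the two passes really are independent, they can usually be merged so the shared rows run only once. A sketch with made-up bodies (struct fields and operations are invented for illustration):

```c
struct bar { int a, b; };

/* Hypothetical merge of the two loops: the shared first rows run
 * once, followed by the remaining rows of each original loop. */
void foo_merged(struct bar array[], unsigned int count)
{
    for (unsigned int i = 0; i < count; i++) {
        int doubled = array[i].a * 2;  /* the five or six shared rows */
        array[i].a = doubled;          /* rest of the first loop      */
        array[i].b = doubled + 1;      /* rest of the second loop     */
    }
}
```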

Don't Wait For Bad Things To Happen

Things only get as bad as you are willing to let them.
This has happened to me so many times that I'm starting to think bad luck is real. The situation is the following: a product has been on the market for several years and everything works fine. At some point, a customer reports a strange behavior. You start to look at the problem and find a horrible bug that has been there since the beginning. In the time it takes to think about a solution, implement it, and test it, at least two other customers report the same issue.

How is this possible? How can everything work fine for years, and then in one week three different people find the same bug? The only answer I have is...

Murphy's Law

There are several versions of it, but I believe it can be summarized this way:
If anything can go wrong, it will go wrong in the worst possible way.
This may seem pessimistic, but knowing that every bug can be potentially catastrophic can help us be more focused and more critical about our code. What I've seen frequently is corners being cut to meet deadlines (yes, Your Worship, I'm guilty too) with the promise (to whom?) of doing the right thing in the future. But usually that future comes when it's too late and a customer has already found the problem.

The only way I know to prevent this kind of issue is to plan periodic revisions of the code that can lead to refactoring sessions. Another idea may be to have a checklist of things to verify before putting your program in production. For C programs, it may be something like this:
  • no strcpy() allowed - use strncpy()
  • no sprintf() allowed - use snprintf()
  • check for NULL pointers
  • check for memory leaks
  • ...
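The first two items deserve a note: strncpy() does not NUL-terminate the destination when the source fills the buffer, so a small wrapper (a sketch; the function names are mine) is safer than calling it directly, while snprintf() already guarantees termination:

```c
#include <stdio.h>
#include <string.h>

/* Bounded copy that always NUL-terminates (strncpy alone does not
 * when src is at least dst_size characters long). */
void copy_bounded(char *dst, size_t dst_size, const char *src)
{
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}

/* Formatted output: snprintf truncates instead of overflowing. */
void greet(char *dst, size_t dst_size, const char *name)
{
    snprintf(dst, dst_size, "hello %s", name);
}
```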
So now you are ready to revise all your team's code to improve it, right? No!

If It Ain't Broke, Don't Fix It!

This is an old adage that is difficult to deny. So, what's the right balance? I've seen performance optimizations made by removing vital checks. I've seen commit messages claiming "removed useless code", made by developers who didn't understand why that code was there.

Well, to me, it all depends on your experience and your knowledge of the code you're going to change. You are allowed... nay, you must improve the code, but you must also know what you are doing. And this is the most important thing!

By the way, if you are in doubt, ask someone more experienced than you.

Check For Memory Leaks!

Last week I lost at least three hours understanding and fixing a small open source library that was leaking memory. The incredible thing was the amount of allocated memory (half of which was never freed). Basically, the library is an overcomplicated implementation of a binary tree in C that, for less than 1 KB of data, leaks 8 KB of RAM.

My first intention was to throw away that piece of junk code, but unfortunately I didn't have time to rewrite it, so I started hunting. But understanding the mass of functions and when they are called was taking too long, so I decided to call my old friend Valgrind.

Valgrind is an excellent tool for detecting memory leaks. The simplest way to use it is the following:
valgrind --leak-check=yes program_to_test [parameters]
This is enough to give you the total amount of allocated memory, with a list of blocks that have not been freed (if any). And, for each of these, there is the full call hierarchy to let you quickly identify why it was allocated.
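To see the report in action, a minimal leak is enough. In this hypothetical fragment, build_message() returns a heap buffer that the caller must free; if the caller forgets, Valgrind lists the block as "definitely lost", together with the allocation call stack pointing at the malloc() below:

```c
#include <stdio.h>
#include <stdlib.h>

/* Returns a heap-allocated string: the caller owns it and must
 * free() it, or Valgrind will report the block as lost. */
char *build_message(const char *name)
{
    char *msg = malloc(64);
    if (msg != NULL)
        snprintf(msg, 64, "hello, %s", name);
    return msg;
}
```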

Of course, Valgrind can do much more than this, but using it to find memory leaks is the minimum every developer should do before releasing software. And the fact that the code is open source is not an excuse: you must ensure the quality of your program, no matter how many people will read the source code.

Versions Madness

Last week, Linus Torvalds, the creator of Linux, published this post on Google+.

So, I made noises some time ago about how I don't want another 2.6.39 where the numbers are big enough that you can't really distinguish them.

We're slowly getting up there again, with 3.20 being imminent, and I'm once more close to running out of fingers and toes.

I was making noises about just moving to 4.0 some time ago. But let's see what people think.

So - continue with v3.20, because bigger numbers are sexy, or just move to v4.0 and reset the numbers to something smaller?
It seems that Linus considers the version number just a name, unrelated to commercial considerations and even to product features. But often, the choice of the version number is treated as a science.

I Like It Complicated

The major-dot-minor format is quite common, and the meaning of those numbers is fairly standard: the minor changes when there are small improvements, while the major increases on bigger changes. But after those numbers there may be a wide variety of things:
  • a build number, automatically increased at every successful compilation,
  • a distribution number or letter, changed every time a build is delivered to testers or customers,
  • a letter indicating the build type (alpha, beta, final, etc.),
  • abbreviations for special releases (pre, RC, QA, ...)
The funny thing is that some of the above cases may be combined, so, for example, you can find 1.7d RC or 2.1.B.174. By the way, for some years I used a four-number system to identify delivered versions of my software: after major and minor there was a counter to keep track of small functional changes or refactorings, while the last number was related to bugs fixed.

The Tech Side

Your software may expose an API or use functions provided by other programs. In this case, the version number has a fundamental purpose: it is through this string that your application is related to the others.

Understanding how other developers deal with version numbers can help you know which releases of third-party software your program is compatible with. And it can save you from some serious headaches when a customer claims that nothing is working.
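This is also why version strings should be compared numerically, field by field, and not as plain strings. A sketch for the common major.minor.patch case (the function name is mine):

```c
#include <stdio.h>

/* Numeric comparison of two "major.minor.patch" strings: returns
 * <0, 0 or >0 like strcmp, but "1.10.0" correctly sorts after
 * "1.9.2", which a plain strcmp would get wrong. */
int version_cmp(const char *a, const char *b)
{
    int av[3] = {0}, bv[3] = {0};

    sscanf(a, "%d.%d.%d", &av[0], &av[1], &av[2]);
    sscanf(b, "%d.%d.%d", &bv[0], &bv[1], &bv[2]);

    for (int i = 0; i < 3; i++)
        if (av[i] != bv[i])
            return av[i] < bv[i] ? -1 : 1;
    return 0;
}
```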

The Commercial Point Of View

Aside from these technical considerations, there is also the commercial side of version numbers. To a user or a customer, a change in the major number means that big changes and big improvements have been made. This generates, in some people, a hunger for the new version that salesmen can use to raise prices. The really important thing in this situation is to meet the customer's expectations.

Even if the software is free, expectations rise whenever the change involves the leftmost numbers. And in this case too, you cannot disappoint the users. However, for the Linux kernel, the situation is quite different: it's not directly used by end users but only by other developers and system administrators. In this case, Linus' idea is not so bad, in my opinion.

Conclusions

The really important thing is to use a system for indicating the version of your software. It has to be meaningful to you and your organization, and it must be clear enough for the end users. If you want my opinion, Semantic Versioning is pretty good.

When You Must Write Unreadable Code

Punch card from a typical Fortran program.
Code was quite unreadable in the old days
Well, if you know me or have been reading this blog for some time, you should know that I consider code readability even more important than correctness. This is because bug-free code does not exist; thus, sooner or later, someone will have to fix it in the shortest time possible.

Nevertheless, there are at least two situations where your code needs to be hard to understand.

You Want To Be Indispensable

It may be because you are a contractor and you want it to be easier for the company to ask you for changes instead of trying to manage them internally. Or, if you are an employee, you may be afraid that someone else could take your position and the company could decide to fire you.

No matter the reason, writing unreadable and poorly commented programs is a good way to make understanding your code really hard and time-consuming for everyone except you. With these premises, the company's best choice is not to give your code to anyone else.

You Hate Your Colleagues

This is a sort of revenge. Do your teammates have a better salary than you, bombastic titles on their business cards, and a boss who always praises them? The only way you can punish them is by forcing them to understand your terrible code.

This can be done on a large scale by including a refactoring session each time you add a new feature or fix a bug in an understandable file. Of course, your main goal is to mess things up.

They have to feel the pain each time they are requested to change something you touched.

Drawbacks

There aren't many, just a couple. First: your colleagues may hate you and consider you a bad programmer. If you are writing unreadable code just to annoy them, this should not bother you too much.

Second: the code will be difficult to understand for you too. So, after a couple of months, it will be painful even for you to manage your own code. If you are an hourly contractor, this can be a good thing, since every change will take longer.

Conclusions

I personally don't see any other reason to write poorly readable code. And even the above two are quite questionable. Always remember that with a good VCS it is easy to detect who introduced the mess in the code and, at some point, someone may decide that it's better to restart a project from scratch instead of having only one developer able to manage it.

RTFMC

RTFM by xkcd
If you don't know, the acronym RTFM means "Read The Friendly Manual". And this is exactly what I did several months ago, when I used a third-party library. I made a simple test program and everything seemed to work just fine.

This week, I used that library for work and started to see some strange behavior in my app. The output of the old test program was still correct, but the same code in a bigger application caused wrong values to be shown and some crashes.

It took me hours to understand where the problem was. And, can you guess? I found it only after reading the friendly manual carefully. It was clearly written that some returned data were references to members of a structure.

What I was trying to do was access them after the structure was freed. The simple test program seemed to work fine only because it was too short, so the memory just freed had not yet been overwritten by anything else.

Lesson learned: it's not enough to read the manual; you have to read it carefully. Or, if you prefer an acronym: RTFMC.
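In C terms, the trap looks roughly like this (a reconstruction with invented names, not the real library):

```c
#include <stdlib.h>
#include <string.h>

struct session {
    char name[32];
};

/* The accessor returns a pointer INTO the structure, not a copy:
 * it is valid only while the structure itself is alive. */
const char *session_name(const struct session *s)
{
    return s->name;  /* a reference to a member, not a copy */
}
```

After free(s), the returned pointer dangles; in a short test program the freed memory may still hold the old bytes, which is exactly why the bug stayed hidden.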


Image from xkcd licensed under a Creative Commons Attribution-NonCommercial 2.5 License.

How To Recover Deleted Git Commits

In many Git tutorials, it's written "never use git reset --hard". And there is a good reason: this command deletes commits and, if you haven't pushed to a remote repository, your changes are lost (if you don't know where to find them).

A Little Story

This happened to me some years ago, when I was a Git newbie. There was a bug in a piece of software, so I created a new local branch starting from master and started to fix it. In the meantime, a colleague of mine asked me for a quick workaround so he could continue his work. So I switched back to master and added a couple of temporary commits.

After a week, the state of the repository was this:


The bug was fixed, the temporary commits could be removed, and the branch merged into master. Easy to say, easy to do, easy to mess up.

My idea was to move to master, delete the temporary commits, and then merge the fix branch. Unfortunately, when I ran...
git reset --hard HEAD^^
...I was on the wrong branch. The good commits were gone. Panic!

Where Have They Gone?

What I've learned from this experience is that deleted commits are still there, at least until you run git gc or git prune. The problem is finding a way to bring them back. What I did at the time was use grep to search for the commit message under the repository's .git directory.

In this way, I discovered that the logs in the .git/logs/refs/<branch-name> directory also record the hash of every commit. With the hashes, it was easy to check out the second commit (going into a 'detached HEAD' state) and verify that nothing was missing.

At that point, I created a new branch (with git checkout -b new_fix) and carefully executed the original plan, this time without surprises.
I love it when a plan comes together!
- John "Hannibal" Smith
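Today I would reach the same hashes with less grepping: git reflog records every position HEAD has been at, including commits no branch points to anymore. A sketch in a throwaway repository (names and messages are invented):

```shell
# Throwaway repo with two commits, then a destructive reset.
dir=$(mktemp -d) && cd "$dir"
git init -q
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "good commit"
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "fix"
git reset -q --hard HEAD^            # "fix" disappears from the branch...

# ...but the reflog still knows its hash:
lost=$(git reflog --format='%h %s' | awk '/ fix$/ {print $1; exit}')
git branch recovered "$lost"         # bring it back on a new branch
git log -1 --format=%s recovered     # prints "fix"
```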

The 80-20 Rule: Pareto And The Devil

Probably only the number of webpages with images of cats is greater than the number of those talking about the Pareto principle. Nevertheless, I want to add my own just because I think it is not so clear to all my colleagues.

A rough definition of this law may be:
To accomplish the final 20% of a job, you'll need 80% of the total time.
If you are close to the deadline of a big project, this sentence sounds quite depressing, doesn't it?


But if you have some experience, you know this principle is absolutely correct. How much time have you spent moving UI objects one pixel at a time until the Project Manager was satisfied? And what about colors? And that damn string that doesn't fit the printed page? And the final comma to be removed from JSON arrays? And that strange bug that only happens on a few PCs?

All these things are important, but not fundamental. They don't represent the core of the application, just some details. In fact, another way to express nearly the same concept is:
The Devil hides in the details.
In my opinion, it is all in the difference between a proof of concept and a real application. A piece of software, in order to be given to a customer, must be:
  • efficacious - it must do its job
  • efficient - the job must be done in the best way possible (according to time constraints and remembering that perfection is impossible to achieve)
  • reliable - it must handle failures properly and always preserve the user's data
  • usable - the user must find it natural to use (*)
People who don't have experience coding "real" applications usually underestimate the benefits of usability, reliability, and sometimes efficiency. Without these characteristics, your software will be nothing more than a proof of concept. And you are not creating proofs of concept, are you?


(*) = I know this is not the most complete definition of usability, but I think it gives an idea of what the final intent of any user interface should be.

Insanity And 4 Other Bad Things

Dilbert by Scott Adams
The definition of insanity is doing the same thing over and over and expecting different results.
Some say this sentence was first pronounced by Benjamin Franklin; others attribute it to Mark Twain or Albert Einstein. They are all wrong. But the caliber of the people to whom this quote is ascribed should tell you something about its correctness.

There is also an ancient Latin maxim (traditionally attributed to Seneca) that states a similar concept:
Errare humanum est, perseverare autem diabolicum et tertia non datur.

To err is human; to persist [in committing such errors] is of the devil, and the third possibility is not given.

[Thanks to Wikipedia]
With these premises, I have to conclude that the Devil is causing much of the insanity in the world nowadays. Take this as a general discourse, but it seems to me that many people keep doing the same things in the same old way, facing the same problems and delays every time, without understanding that things could go much better just by changing a few things in their way of acting.

Excluding supernatural interventions, in my experience this kind of behavior is mainly due to four causes.

1. (Bad) Laziness

Not the kind that makes you find the fastest solution to a problem. This laziness is absolutely harmful; it's the concept of the comfort zone amplified to the maximum. "I don't wanna change!" and "I don't wanna learn anything new!" are their mantras.

Every change in procedures is considered a total waste of time, and a new development environment is simply useless. If you have a couple of people of this kind on your team, you can be sure that every innovation will be hampered.

To overcome this behavior, you can try proposing a total revolution in order to obtain a small change.

2. Arrogance

"I'm sure I've made the right choice!", no matter if this decision was made years ago and the world has changed since then. By the way, the initial choice may have been wrong from the beginning, but nothing can make them change their mind. Probably this has something to do with self-esteem.

It's quite impossible to work with this kind of developer, since they will never admit their faults and will try to put the blame on others.

Sometimes a good strategy may be to present your suggestions as if they had been proposed by the arrogant developer himself.

3. Ignorance

There's nothing bad about not knowing something. The problem arises when they don't care about their ignorance (see point 1), when they don't want to admit it (see point 2), or when they don't trust others' suggestions.

This last point may seem a little strange: if I don't know something, I have to trust someone who is more informed or skilled than me, right? Unfortunately, it doesn't work this way. If you need a demonstration, search for "chemtrails" on Google.

I don't have a suggestion on how to minimize the impact of these guys on your team. Maybe training can be useful, but the risk is that they won't trust the teacher.

4. Indifference

This is the worst, especially in a manager. They don't care about the feelings of their subordinates. "There is no need for them to be happy doing their job" and "It's not a problem if they spend more time than needed on trivial tasks that could be automated" are their thoughts when someone complains.

I don't know if there is some sadism in this behavior, but it's quite frustrating. And it's very bad for the team and for the whole company.

Conclusions

During my life, I've had the "opportunity" to work with people belonging to one or more of the above categories, and I can assure you that the last is the worst. You simply cannot team up with someone who doesn't care about you.

Suggested complementary read: Is Better Possible? by Seth Godin.

You Are Not A Programmer


So you write code every day, maybe in a nerdy language like C or even in assembly. And a company pays you for this job. When someone asks you "What do you do?", it's normal for you to reply "I'm a programmer", isn't it?

Well, let's see if you are a liar. This is a simple yes/no questionnaire about what you have done in the last two years.

The Real Programmer Test

  1. Have you studied a new programming language?

  2. Have you used a new technology?

  3. Have you spent some time optimizing your code?

  4. Have you programmed for pleasure outside of working hours?

  5. Have you eaten at least 50 pizzas?

  6. Have you drunk at least 3 coffees every day?

  7. Have you at least once chosen not to use your favorite programming language because you thought it was not the best fit for a project?

  8. Have there been more happy days than sad days when doing your job?

If you replied "yes" to more than half of the above questions, congratulations: you are a real programmer!

Explanation of the Test

If you are not a real programmer, maybe you cannot understand where the above questions come from, so here are some hints.

  • A programmer is curious by nature: he likes to learn new languages and technologies, even if they are not required by his job (questions 1 and 2).

  • A programmer knows that all code needs some refactoring at some point (question 3).

  • A programmer is happy when he can write code (questions 4 and 8).

  • A programmer is realistic: he knows that one-size-fits-all doesn't exist in computer science; in other words, for some purposes one language/technology can be better than another (question 7).

  • A programmer needs to have his brain constantly fed with carbohydrates (pizza) and sometimes powered by caffeine (questions 5 and 6).

Having said that, you may argue that many of these characteristics are innate. Well, you are right! Many people write code because they think it's just like any other job, but they are wrong. Programming requires passion, devotion and the right way of thinking. And above all (as I once read in a pizzeria):

If it were an easy job, everyone would be able to do it

Image by icudeabeach

Reliability First - Applications

What does reliability mean in computer science? Speaking about an application, how can we say it is reliable? I don't know if there is a shared opinion, but mine matured after a scary situation.

Some years ago, at my previous workplace, we created a huge file with a very powerful and even more expensive third-party software. A few seconds after we pressed the save button, the software crashed. Panic. We searched for the saved file and we found it. Don't panic. So we restarted the powerful-and-expensive-third-party software to reopen the file, but it failed. We tried several times, even on other PCs, without success. The (binary and proprietary) file seemed to be corrupted. Okay, panic!


Fortunately, we also owned a license for a similar piece of software, much less powerful and much cheaper (about twenty times cheaper). We had nothing to lose, so we tried to open the file with this cheap software and... it worked! All our work was there. So we saved the file under a different name in the cheap software and eventually we were able to open it again with the expensive one.

After that incident I have a clear idea of what reliability means when speaking about applications. And you?

Image created with GIFYouTube. Scene taken from the movie "Airplane II: The Sequel".

Write and Rewrite (and Make it Better)


I'm not comparing myself to Hemingway but, when I write a new piece of software, it works the same way for me. I usually write code in a quick-and-dirty way just to make things work. I have to follow my stream of consciousness and lay down the basis of the algorithm.

Then, when something starts to work, I begin to make it look better. This means I replace variable names like goofy, pluto, etc. with more descriptive and meaningful ones. I check whether I can move some piece of code into a function, or whether there is a better-performing algorithm to use. Finally, I look at error handling, edge cases and memory leaks.
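As a toy illustration (the function and all the names here are invented, not taken from any real project), the two stages of this workflow might look like this:

```python
# Stage 1: the quick-and-dirty draft. Cryptic names, no error handling,
# written just to follow the stream of consciousness.
def f(l):
    pluto = 0
    for goofy in l:
        pluto += goofy
    return pluto / len(l)


# Stage 2: the rewrite. Descriptive names, a docstring, and the edge case
# (an empty list) that the first draft would have crashed on.
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    if not values:  # edge case handled only in the second pass
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)


print(average([2, 4, 6]))  # → 4.0
```

Both versions compute the same result on valid input; the difference is that the second one can be read, tested and trusted by someone else.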

The final result is pretty different from the original code, but it's surely (?) better written, more readable and faster. You may argue that I could have written the better version directly, limiting later modifications to small cosmetic changes, and saved time.

My answer is that it's not so easy. Managing error codes and edge cases, or finding meaningful names for variables and functions, takes time, and if the stream of consciousness stops, it will probably take longer to get the work done.

What about you? Do you ever follow your stream of consciousness? Or do you prefer to immediately take care of all the details?


Image created with Pinwords - Picture of Ernest Hemingway taken from here (public domain)

Ideas Are Not Enough

Image by Pictofigo
Today, +Seth Godin, in his daily post, spoke about something that is essential for me: the importance of going from ideas to implementation.

A great architect isn't one who draws good plans. A great architect gets great buildings built.

You don't know how many times I've heard people (including myself) complaining about a new product presented by a competitor saying "I've had the same idea years ago".

So are you trying to say you are smarter? Probably not smart enough to get things done. Or maybe you thought your idea would be translated into a project by your subordinates?

Stop fooling yourself! If you want your dreams to come true, stop sleeping and start working!

I've already written about this concept in Avoid Perfection and, briefly, in Everyone Matters.

Cleaning Up the Path in 5 Easy Moves

Freddie Mercury - "I Want to Break Free" music video
Freddie cleaning his house
The idea for this post came to me last weekend while I was cleaning my house. This activity is made up of several parts: some are funny (like using the vacuum cleaner while singing "I Want to Break Free"), some are boring, and others are awful.

The development of a big project is very similar: there is the challenging part, the damn-long part and the stupid part. Here is some advice to help you accomplish your job in the best way.

1. Split the project into tasks and subtasks - this is obviously the first thing to do; starting to develop headlong is something to avoid.

2. Check for constraints - it's important to understand which tasks must be completed before others and to define a clear path through them.

3. Start with the task you consider the worst - it may be the longest, the most boring or the most annoying, the choice is up to you; but once it's done, the rest of the project is all downhill.

4. Work on a single task at a time - you have a road map (defined at point 2), why would you abandon it?

5. Always work on tasks related to those already accomplished - this way, you can be sure that new pieces fit properly into the existing structure, and it's easier to test your progress.
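To make point 2 concrete, here is a minimal sketch (the task names and dependencies are invented for illustration) of how a dependency map can be turned into a work order with a topological sort:

```python
from collections import deque

# Hypothetical task graph: each task maps to the tasks it depends on.
dependencies = {
    "design schema": [],
    "write backend": ["design schema"],
    "write frontend": ["design schema"],
    "integration test": ["write backend", "write frontend"],
}


def work_order(deps):
    """Return the tasks in an order that respects the constraints (Kahn's algorithm)."""
    pending = {task: set(d) for task, d in deps.items()}
    # Tasks with no unmet dependencies are ready to start.
    ready = deque(task for task, d in pending.items() if not d)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        # Completing a task may unblock the tasks that depend on it.
        for other, unmet in pending.items():
            if task in unmet:
                unmet.remove(task)
                if not unmet:
                    ready.append(other)
    if len(order) != len(deps):
        raise ValueError("circular dependency detected")
    return order


print(work_order(dependencies))
```

Once the order is fixed, points 4 and 5 come for free: you just take the next task off the list, and it is guaranteed to build on work already done.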

Well, that's all folks. Let me just add one more general-purpose suggestion: always remember the 80/20 rule, which here can be restated as "details will cost you the majority of your effort".