Black Magic Code

Tuesday, November 28, 2006

This is almost too good to be true....

I was over at reddit today and found a very interesting link: an interview with Bjarne Stroustrup, the designer of C++, titled "The Problem with Programming". If you had a hard time following my last rambling about the software industry, this is the article to read.

Bjarne states:

I think the real problem is that "we" (that is, we software developers) are in a permanent state of emergency, grasping at straws to get our work done. We perform many minor miracles through trial and error, excessive use of brute force, and lots and lots of testing, but--so often--it's not enough.

Software developers have become adept at the difficult art of building reasonably reliable systems out of unreliable parts. The snag is that often we do not know exactly how we did it: a system just "sort of evolved" into something minimally acceptable. Personally, I prefer to know when a system will work, and why it will.

And when asked how to fix it:

In theory, the answer is simple: educate our software developers better, use more-appropriate design methods, and design for flexibility and for the long haul. Reward correct, solid, and safe systems. Punish sloppiness.
But he is not naive, and continues:

In reality, that's impossible. People reward developers who deliver software that is cheap, buggy, and first.
Read the whole interview; it is worth your time.

Wednesday, November 22, 2006

One thing wrong with the software industry.

This is a post I have wanted to write for a long time. It is an article about what is so depressing about the craft of creating computer software, and hopefully I'll be writing a few more like it.

The thing is that I know the secrets of why so many software projects just fail. Well, they aren't exactly secrets; they are more like lessons that many people in positions to make decisions about software development just don't learn. If they were secrets, I would be a billionaire by now, with a streak of successful projects behind me.

Lesson one: Artificial deadlines, milestones, timelines, or any time-related constraint that is way too short have a habit of biting you in the ass in the end. This is the most costly mistake many companies make. Examples of this practice: "This can't possibly take more than a couple of hours!" or "This must be done in x months, or the company has to close up shop!". If a developer or development manager buys into this, the impending train-wreck is almost always a fact.

The single most important reason why it is a train-wreck just waiting to happen is this: developers under stress are almost always more prone to produce low-quality code. It is possible that the code produced under pressure actually works, sort of. Let's say that it produces the desired result. Of course there are a few bugs, but those get fixed... no sign of the big wreck I warned you about earlier. This is a possible outcome, and in my experience not too uncommon. So where is the train-wreck?

It is in the maintenance phase of your application. All of a sudden you discover that stressed-out code lacks a few things, like, uhm... design, because there was no time, and comments, because the developer under stress thought "I'll fill that in later, after the project is shipped and done!" and then the next crisis surfaced (it was those pesky bug reports that poured in!).

The psychology behind the keyboard that produces this is that an artificial timeline/deadline makes the project get treated like a 100 m dash instead of a marathon. This is the important thing: when you impose stress on a developer, he will code like there is no tomorrow, and the result has no tomorrow either. He expends all his energy at the very beginning of the race and just codes away without thinking of the rest of it. What happens if a bug surfaces six months after delivery and turns out to be a huge design flaw? There is no simple way of "patching" it; it takes a lot of time and money. What happens if the feature request that comes in after one month is crucial for your customer(s), and it doesn't fit in your design without rewriting significant portions of the code? Same result. You might lose the customer(s), because they lack patience and/or find a cheaper vendor that can provide what they want. That is expensive, isn't it? That is the wreck in all its glory...

This was an entry inspired by parts of this article.

Monday, November 20, 2006

This looks like great fun....

I've been keeping my eye on Damn Vulnerable Linux for a while now. It looks like a terrific educational tool... and who knows, it might teach experienced people a trick or two as well.

Tuesday, November 14, 2006

kernel.randomize_va_space....

This is actually quite a cool feature that I recently ran across on my Linux box at home. It runs Ubuntu 6.10, and I tried to run some old shellcode examples, with specially crafted vulnerable software just to demonstrate. For the life of me I couldn't get them to work. So I forgot about it for a week or so, and one day last week I started to hunt down what was wrong. I found it... all newer 2.6 Linux kernels have the nasty habit of randomizing the address space of new processes. Why would you want to do that, you may ask...

Because classic shellcode exploits rely on a known address space layout to execute their malicious payload. Randomizing the address space of new processes means that your computer becomes more secure against some exploits.
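You can actually watch the randomization happen. This is just a quick sketch, assuming a Linux box with /proc mounted: every command below runs in a fresh process, and with randomization on, the stack mapping of each fresh process starts at a different address.

```shell
# The current setting lives in /proc (0 = off, non-zero = on);
# this is the same knob that "sysctl kernel.randomize_va_space" reads.
cat /proc/sys/kernel/randomize_va_space

# Each grep is a new process; with randomization on, the [stack]
# line shows a different start address on every run.
grep stack /proc/self/maps
grep stack /proc/self/maps
```

Run the two greps a few times: same binary, different stack address each time, which is exactly what breaks shellcode that hard-codes a return address.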

Notice the some, because it is not a panacea. It is not going to fix all your vulnerabilities. Also, it is possible to turn it off if you by some odd chance experience problems: "sysctl -w kernel.randomize_va_space=0". And I suspect there are a number of system administrators out there who would just turn it off "because it could pose a problem", if they knew about it.