The three approaches to computer security
1) Security by Correctness
2) Security by Isolation
3) Security by Obscurity
Let's discuss those categories in more detail below.
Security by Correctness
The assumption here is obvious: if we can produce software that doesn't have bugs (nor any maliciously behaving code), then we don't have security problems at all. The only problem is that we don't have any tools to make sure that a given piece of code is correct (in terms of implementation, design and ethical behavior). Still, if we look at various efforts in computer science, we will notice that a lot of work has gone into achieving Security by Correctness: "safe" languages, code verifiers (although not sound ones, just heuristic based), developer education, manual code audits, etc. Microsoft's famed Security Development Lifecycle is all about Security by Correctness. The only problem is: all those approaches sometimes work and sometimes do not, sometimes they miss a bug, and there are also problems that I simply don't believe can be addressed by automatic code verifiers or even safe languages, e.g. logic/design bugs, or deciding whether a given piece of code behaves maliciously or not (after all, in many cases this is an ethical problem, not a computer science problem).
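To make the distinction concrete, here is a little C sketch (the function names and the access policy are made up purely for illustration). The first bug is mechanically detectable; the second is a bug only relative to human intent, which no verifier can know:

    #include <stdio.h>
    #include <string.h>

    /* Implementation bug: a safe language or a bounds-checking
       verifier would flag this classic unchecked copy mechanically. */
    void copy_name(const char *src)
    {
        char buf[16];
        strcpy(buf, src);  /* overflows if src is longer than 15 chars */
        printf("hello, %s\n", buf);
    }

    /* Logic/design bug: memory-safe, type-correct, passes any
       verifier, yet the policy itself is wrong -- no tool can know
       that "admiral" was never supposed to get admin rights. */
    int is_admin(const char *user)
    {
        return strncmp(user, "adm", 3) == 0;
    }

    int main(void)
    {
        copy_name("world");
        printf("is_admin(\"admiral\") = %d\n", is_admin("admiral"));
        return 0;
    }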
To sum it up: I think that in some more or less distant future (some people talk about a timeframe of 50 years or so), we will get rid of all the implementation bugs, thanks to safe languages and/or sound code verifiers. But I don't believe we could assure correctness of software at any level of abstraction higher than the implementation level.
Security by Isolation
Because of the problems with effectively implementing the Security by Correctness approach, people have, from the very beginning, also taken another approach, which is based on isolation. The idea is to split a computer system into smaller pieces and make sure that each piece is separated from the others, so that if one of them gets compromised or malfunctions, it cannot affect the other entities in the system. User accounts and separate process address spaces, introduced in early UNIX and now present in every modern OS, are examples of Security by Isolation.
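Just to illustrate how ordinary programs lean on this kernel-provided isolation, here is a minimal C sketch of a privileged daemon dropping to an unprivileged account before doing risky work (the uid/gid value 1000 is an arbitrary placeholder):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Drop group first, then user -- the other order would
           leave us without the privilege to change the group. */
        if (setgid(1000) != 0 || setuid(1000) != 0) {
            perror("failed to drop privileges");
            exit(EXIT_FAILURE);
        }

        /* From this point on, the kernel's isolation mechanisms
           (uid checks, file permissions) confine whatever this
           process does, even if it later gets compromised. */
        printf("running as uid %d\n", (int)getuid());
        return 0;
    }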
Simple as it sounds, in practice the isolation approach has turned out to be very tricky to implement. One problem is how to partition the system into meaningful pieces and how to set permissions for each piece. The other problem is implementation - e.g. if we take a contemporary consumer OS, like Vista, Linux or Mac OSX, all of them have monolithic kernels, meaning that a single bug in any kernel component (think: the hundreds of 3rd-party drivers running there) allows an attacker to bypass all the isolation mechanisms the kernel provides to the rest of the system (process separation, ACLs, etc.).
Obviously the root of this problem is that the kernels are monolithic. Why not implement Security by Isolation at the kernel level then? Well, I would personally love that approach, but the industry simply took another course and decided that monolithic kernels are better than microkernels, because it's easier to write code for them and (arguably) they offer better performance.
Many people, myself included, believe that this landscape can be changed by virtualization technology. A thin bare-metal hypervisor, like e.g. Xen, can act like a microkernel and enforce isolation between the other components in the system - e.g. we can move drivers into a separate domain and isolate them from the rest of the system. But again, there are challenges here at both the design and the implementation level. For example, we should not put all the drivers into one and the same domain, as this would provide little improvement in security. Also, how do we make sure that the hypervisor itself is not buggy?
Security by Obscurity (or Security by Randomization)
Finally we have the Security by Obscurity approach, which is based on the assumption that we cannot get rid of all the bugs (just as in the Security by Isolation approach), but we can at least make exploitation of those bugs very hard. So, it's all about making our system unfriendly to the attacker.
Examples of this approach include Address Space Layout Randomization (ASLR, present in all newer OSes, like Linux, Vista, OS X), StackGuard-like protections (again used by most contemporary OSes), pointer encryption (Windows and Linux) and probably some other mechanisms that I can't remember at the moment. Probably the most extreme example of Security by Obscurity would be to use a compiler that generates heavily obfuscated binaries from the source code and creates unique (at the binary level) instances of the same system. Alex did his PhD on this topic and is an expert on compilers and obfuscators.
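A quick way to see ASLR in action is to print a few addresses and run the same binary twice - under ASLR the stack and heap addresses (and, for a position-independent executable, the code address too) change between runs, so an exploit cannot rely on hard-coded addresses. A minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int local;
        void *heap = malloc(1);

        /* Each run should print different stack and heap addresses;
           the code address moves too if the binary is built as PIE. */
        printf("code : %p\n", (void *)main);
        printf("stack: %p\n", (void *)&local);
        printf("heap : %p\n", heap);

        free(heap);
        return 0;
    }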
The obvious disadvantage of this approach is that it doesn't prevent bugs from being triggered - it only makes meaningful exploitation of them very hard, or even impossible. So if one is also concerned about e.g. DoS attacks, Security by Obscurity will, in most cases, not prevent them. Other problems with obfuscating the code are performance (the compiler cannot optimize the code for speed) and maintenance (if we get a crash dump from an "obfuscated" Windows box, we cannot count on help from technical support). Finally, there is the problem of proving that the whole scheme is correct - that our obfuscator (or e.g. ASLR engine) doesn't introduce bugs into the generated code and that we will not get random crashes later (crashes that we would most likely be unable to debug, as the code would be obfuscated).
I wonder whether the above categorization is complete, and whether I haven't forgotten about something. If you know an example of a security approach that doesn't fit here (besides blacklisting), please let me know!