A Computer Security course: Introduction

Copyright © 2000 - 2002, Giovanni Chiola.

This file was last updated on Oct. 15th, 2002.



Introduction





Computer Systems and their properties

Computer systems are usually adopted to implement parts of the procedures or activities needed by an organization to accomplish its mission. Some of these procedures may be critical for the organization, hence the computer systems that implement them must guarantee their correct implementation. Relying on computer systems usually involves a change of structure and organization compared to the manual implementation of similar critical procedures, so that it might be difficult or even impossible to "switch back to manual" if/when the automated procedures fail. The study of the properties of such mission-critical systems and procedures is thus mandatory in order to anticipate possible weaknesses and prepare for the adoption of appropriate counter-measures.

Resources and Users

Computer systems offer several kinds of resources such as storage capacity, computational power, software tools, and so on. Such resources are usually accessed and exploited by people, who can be classified into several categories of users, such as managers, employees, customers, the general public, or malicious attackers.

In this framework, any person who has access (authorized or not) to some of the resources of the system is called a "user." In order for a system to enforce secure access policies on its resources, the system must be able to identify users (individually, or at least as belonging to various predefined categories).

A computer system may be further characterized by its degree of flexibility in handling various resources and dealing with many different users. A system is classified as open with respect to its resource and/or user populations if it allows the dynamic addition or removal of resources and/or users during normal operation. Conversely, a system is classified as closed with respect to its resource and/or user populations if it requires an off-line re-configuration to allow changes in the populations of resources and/or users. As might be expected, providing security in a closed system is easier than in an open system environment. However, the flexibility of an open system may be a stronger requirement than security in several application domains, thus forcing designers to tackle security problems in open systems as well.

Authentication of users as well as resources (that is making sure that someone or something is precisely the person or resource it claims to be) is one of the fundamental problems to solve in a secure open system. The main technique adopted consists in challenging the entity to be authenticated with a question that only that entity is supposed to be able to answer correctly. The application of this technique requires the ability of the user (and sometimes also of the system) to keep information secret (the correct answer to the challenge). In this sense we may claim that security cannot be achieved without the ability of each legitimate user to keep their authentication information secret.
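
As a concrete illustration of the challenge-response idea described above, the following minimal sketch (in Python) lets a verifier check that the other party knows a shared secret without ever sending the secret itself; the secret value and all names are purely illustrative assumptions, not part of the original text.

    import hashlib
    import hmac
    import os

    # Hypothetical secret shared in advance between the user and the system.
    shared_secret = b"example-secret-known-only-to-user-and-system"

    def make_challenge() -> bytes:
        # The verifier issues a fresh random challenge (a nonce).
        return os.urandom(16)

    def answer_challenge(secret: bytes, challenge: bytes) -> bytes:
        # Only a party holding the secret can compute the correct answer.
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def verify(secret: bytes, challenge: bytes, answer: bytes) -> bool:
        # The verifier recomputes the expected answer and compares it in constant time.
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, answer)

    challenge = make_challenge()
    response = answer_challenge(shared_secret, challenge)
    print("authenticated:", verify(shared_secret, challenge, response))

Note that the secret itself never travels over the channel; only the random challenge and the keyed answer do.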

Sometimes the system itself may provide privacy features to help legitimate users keep their authentication secrets. In any case, a system alone cannot provide security features without relying on the faithful cooperation of legitimate users and, vice versa, a user cannot completely rely on the security of a system without trusting at least some crucial components of the system they are using (the so-called security kernel).

Complexity, Decomposition, and Properties

Today's computer systems are characterized by an extremely high degree of complexity. In order to be able to design, build, or even simply use a computer system one needs to apply a great deal of abstraction and reason in terms of extremely simplified models. Engineering has developed over time several powerful techniques and methodologies to deal with the complexity of systems, the main ones being a set of different decomposition techniques. Divide et impera is the motto that has guided the design and implementation of such (otherwise unmanageably) complex systems.

Central to the heart of Computer Science and Engineering is the concept of the Virtual Machine. Generally speaking, decomposition is more or less equivalent to the idea of building a more complex virtual machine out of one or more (simpler) virtual machines. There are several ways of approaching the problem.

All these decompositions are applied simultaneously to allow designers and implementors to accomplish their job by understanding in detail some small fraction of a complex system, while retaining only a very rough and imprecise idea of the rest of the computer system with which their small piece will have to interact. Usually the most difficult part of a design is the correct identification and specification of the interfaces among the various modules. The design of a single module is usually accomplished in isolation, taking the specification of the interface as a high-level model representing the interaction with the rest of the system.

In order to meet their design goals, however, the designers of a single module at a given level of abstraction must face more than the basic question of whether or not the module implementation satisfies the requirements posed by its interface specification. After the system is built and assembled, the problem is to check whether the whole system satisfies its global requirements. One possible cause of malfunction, even if all individual components meet their individual requirements, is poor or erroneous design and specification of the overall architecture and of the interfaces. Usually the kind of guarantee provided by a successful implementation of a module is a conditional one: if the rest of the system respects the specification of the interface, then this module also respects the requirements of the interface. But what happens if the interaction among two or more modules brings the system to an unforeseen state in which some of the interface assumptions are violated?

Some properties of computer systems, at least under certain conditions, are compositional, in the sense that if they are satisfied by all components, and if the interfaces are appropriately defined, then they are satisfied by the whole system as well. Unfortunately, several interesting properties of computer systems are non-compositional, that is, they depend not only on the individual properties of the components but also on their actual interaction within the complete system.

Examples of non-compositional properties are performance, reliability, and security. Their study thus requires a careful understanding and analysis of the complete system as well as of the individual components. This is why studying the performance, reliability, and security of complex computer systems is so difficult compared to studying the properties of the individual components.

Availability, Integrity, Privacy, Reliability

Speaking of Computer Security to the man in the street would probably lead to considerations about spies and teenage hackers breaking into the computer systems of intelligence agencies to grab confidential data. In other words, security is almost always intended as a synonym for privacy or secrecy.

Indeed in several cases privacy is one of the major security concerns, but certainly not the only one. Therefore we shall review the different characteristics that a secure system is supposed to have.

Availability
The system must be ready to perform its functions and/or supply the data it contains whenever requested by legitimate users, without any unusual delay. Any legitimate user must perceive the system as continuously operational at its "cruise speed". The percentage of normal operational time over total time is usually taken as a numerical measure of how well a real system subject to discontinuity in service approximates the ideal case of a system that is always available (a small worked example is sketched after this list). For example, a system operating 12 hours a day could be graded as 50% available during the week. High availability (24 hours per day, 7 days per week without interruption, no matter what the environmental conditions are) is currently achievable only using extremely costly hardware, software, and operational organization.
Integrity
The system must keep its data and procedures unchanged, so that legitimate users can always rely on the correctness and completeness of all data and procedures supplied by the system. No unauthorized user of the system must be able to add, delete, or change data and procedures without the sabotage attempt being at least noticed, if not prevented. In case the system is not able to prevent unauthorized changes of data, it must at least be able to distinguish original from fake data, and deliver only original data to legitimate users. Changed data in this case would become unavailable, thus affecting availability and/or reliability rather than integrity.
Privacy
Only information classified as "public" must be accessible to all users without restrictions. Information classified as "confidential" must be accessible only to legitimate, authorized users. Various classification schemes have been defined, such as discretionary and mandatory. In any case the system must be able to implement the chosen classification and privacy policy correctly and completely. Under no circumstances must an unauthorized user be able to access classified information.
Reliability
Users can rely on the ability of the system to eventually perform its functions and supply the information that was stored in it. Unlike availability, this does not require the system to be continuously operational all the time. However, even after a period of reduced function or of complete inactivity the system is expected to resume its normal function and retrieve all data. Reliability can be achieved even in the presence of system faults by including repair capabilities in the system functions. A typical example of an implementation of reliable storage is taking backups, and restoring data in case of disk faults.
A combination of all of the above characteristics is required for a system to be considered "secure," with emphasis on one aspect or another depending on the particular application the system is devoted to. Availability, integrity, and reliability may be primary concerns in building a server for the dissemination of official information such as laws and regulations issued by a national authority, with minor requirements in terms of privacy. On the other hand, reliability, integrity, and privacy may be the major concerns in building a database of sensitive business data used for medium-term strategy planning, with minor requirements in terms of availability.
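
The availability measure mentioned in the list above is simply the ratio of operational time to total observation time; the short sketch below (Python, with purely illustrative figures) reproduces the 12-hours-a-day example.

    def availability(uptime_hours: float, total_hours: float) -> float:
        # Fraction of time the system is operational over the observation period.
        return uptime_hours / total_hours

    # A system operating 12 hours a day, observed over one week:
    print(f"{availability(12 * 7, 24 * 7):.0%}")   # prints 50%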

Fault-tolerance, Robustness, and Security

Correctness is of course a major requirement for any secure computer system. The presence of errors in the design and implementation of system components could easily be exploited by unauthorized users to decrease the levels of availability, reliability, integrity, and/or privacy offered by the system.

Correctness alone, however, is not usually considered sufficient as a requirement for a system to be secure. In an ideal world all correct components of a complex system would always function according to their specification. In the real world, however, system components may fail after some time of normal and correct operation, thus starting to exhibit erroneous behavior that does not conform to their design specifications.

Components of a secure system may fail for various reasons: spontaneously, due to normal aging of physical components and materials; spontaneously, due to inferior quality of their design or materials; or as a consequence of adverse environmental conditions, hazards, or deliberate attacks by intruders. The idea of security is never completely orthogonal to the idea of physical hardening and protection from adverse environmental conditions.

In any case a secure system is always expected to be able to face some kind of attack from unauthorized users. Each component must contribute to the security of the system even in the case of failures (in the component itself or in other components). If a component fails, it should at least fail "in the good way," so as to preserve the overall security of the system to the largest possible extent.

A system is called fully fault-tolerant with respect to some property and the failure of some components if it maintains the property even in case of failure of those components. For instance, a system might tolerate the failure of a disk without compromising the privacy of the information stored in the system. The same system might tolerate the failure of the disk without compromising reliability, provided that a backup copy was taken on some other medium. However, the system would not tolerate the disk failure in terms of availability, unless the information was replicated on some other, still functioning disk that could immediately replace the failed one.
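
A minimal sketch of the last point, in Python: every write is duplicated on two independent replicas (a hypothetical stand-in for two disks), so that the loss of one of them does not affect the availability of the data. This is only an illustration of the replication idea, not a real storage system.

    class ReplicatedStore:
        # Two independent copies of the data, standing in for two separate disks.
        def __init__(self):
            self.replicas = [{}, {}]

        def write(self, key, value):
            # Every write is applied to all replicas.
            for replica in self.replicas:
                replica[key] = value

        def read(self, key):
            # Read from the first replica that still holds the data.
            for replica in self.replicas:
                if key in replica:
                    return replica[key]
            raise KeyError(key)

    store = ReplicatedStore()
    store.write("balance", 100)
    store.replicas[0].clear()      # simulate the loss of one replica
    print(store.read("balance"))   # data is still available from the surviving copy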

Let us now consider the characteristics of a single component with respect to internal and external failures. A (hardware or software) module is said to be fail safe with respect to a given security property whenever an internal functional failure does not prevent the component from keeping that property. A disk that ceases to release the data stored in it is fail safe with respect to privacy (it cannot release data to unauthorized users), while of course it is not fail safe with respect to availability and reliability; it may or may not be fail safe with respect to integrity, depending on the ability of other devices to recognize its failure and distinguish it from the (constant or random) output that could still be read on its output interface.

A (hardware or software) module is said to be robust with respect to a given security property whenever an external functional failure (in another module) does not cause the component to fail to keep the property. Robustness with respect to security properties is one of the hardest features to implement. Usually robustness is obtained by adopting more than one independent mechanism to guarantee the considered property, so that if one mechanism fails the remaining ones are expected to continue to guarantee the property. However, only systems made of robust and fail safe components may actually be able to withstand security attacks in hostile environments.



Malicious Attacks

Historically, attacks on the security of computer systems have been conducted for many different purposes and using a great variety of techniques. It is therefore futile to believe that a single "security solution" might be adequate to withstand such a variety of security attacks. A practical way of confronting security attacks is first of all to study and group them into various classes according to the amount of damage they could cause and the likelihood that the system will actually have to withstand them, depending on the environment in which the system is operated.

Worms, Viruses, and Trojan Horses

The so-called Internet Worm developed by Robert T. Morris in 1988 is probably the first historical example showing the potential vulnerability of unprotected, networked systems to external attacks. Before this event, the Internet was designed and used as a communication tool oriented to a friendly and collaborative environment (see, e.g., J.F. Shoch and J.A. Hupp "The Worm Programs - Early Experience with a Distributed Computation," Communications of the ACM, vol.25, no.3, pp. 172-180, March 1982). Hence nobody was designing protocols with the aim of preventing such an event. Nobody was seriously considering the possibility that someone connected to the Internet could launch an application with the purpose of taking control of remote machines against the interest of their legitimate users. All protocols were designed with the main goal of obtaining high efficiency and flexibility, and authentication was basically reduced to the mutual trust of the various system managers.

It was only after the proof of vulnerability provided by the Worm that network managers started to appreciate the importance of security aspects. However, at that stage of the Internet's development it was not practically possible to stop using all insecure protocols and start designing and implementing totally different ones. Therefore the solution that started to be adopted was to monitor attacks and try to quickly fix all the major security holes that were exploited by these attacks. The race between crackers discovering new security holes and system managers trying to fix them started at that time.

One of the first tricks used by crackers to break into systems and take control of them derives from the ancient stratagem attributed to Ulysses for conquering the city of Troy: building a large, wooden statue of a horse. The statue was large enough to hide several warriors inside it. The inhabitants themselves brought the horse, with the enemy warriors inside, within the city walls. The following night it was easy for the warriors to climb out of the horse, kill the few guards at the main gate, and open the doors to the rest of the army. The city was then set on fire and destroyed in a surprise action.

Nowadays a computer Trojan horse takes the form of a counterfeit program or software module that contains additional code compared to the original. The Trojan horse is imported and executed on a computer system by a legitimate user who believes the program or module is genuine. The classical example could be a web browser or a mail agent that, in addition to performing its normal function (so that the user has no suspicion), sends secret information (for example, passwords) to another user on another machine, who can subsequently exploit such information to take control of the system.
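
One common counter-measure (not discussed in the paragraph above, and sketched here only as a hedged illustration) is to compare a program's cryptographic digest against a value published by a trusted source before installing it, so that a counterfeit copy carrying extra code is detected. The file name and the expected digest below are hypothetical placeholders.

    import hashlib

    def sha256_of(path: str) -> str:
        # Compute the SHA-256 digest of a file, reading it in chunks.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical digest published by the original software distributor.
    expected = "placeholder-digest-published-by-the-distributor"

    if sha256_of("mail_agent_installer.bin") != expected:
        print("digest mismatch: the program may be a counterfeit copy")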

Viruses are another variation on the same basic scheme. In this case the name obviously derives from biology. Again the idea is that a system is "infected" because a legitimate user runs a counterfeit program without knowing it is "contagious." The main difference between a virus and a Trojan horse usually lies in the main purpose of the attack, rather than in the mechanism exploited to conduct it: usually a Trojan horse is intended to take control of the resources under attack in order to exploit them; the system may stay under attack for a long time before legitimate users realize that their system has been taken over. On the other hand, the primary goal of a computer virus is usually to damage the system and spread the infection to other systems as soon as possible.

Denial of Service

Denial of service (DoS) is one of the easiest attacks to perpetrate against a computer system offering services in a distributed environment. In other words, DoS is one of the most difficult attacks to prevent. The purpose of this attack is to prevent the system from offering its services to its legitimate users. The way in which this objective is usually achieved consists in overloading the system with a huge number of fake service requests. In this case the attacker is not interested in seizing information or resources from the system. The main goal of DoS is simply to make the system useless for its original purpose, thus affecting availability.

There are several situations in which this type of attack must be taken seriously into consideration by a system designer, ranging from military applications to unfair industrial competition. A first defense measure is of course a secure authentication of customers and of their requests, in order to be able to discriminate legitimate, well-behaving users from DoS attackers. A second defense step consists in the adoption of appropriate "quality of service" (QoS) policies in order to classify requests into various priority classes, and to reserve a sufficient portion of the available resources for the service of higher priority requests. A third (obvious) counter-measure consists in over-dimensioning the resources available to implement the most critical services.
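
One deliberately simplified way of bounding the request rate accepted from a single source is a token-bucket limiter, sketched below in Python as an illustration of the "classify and bound requests" idea; it is an assumption of this note rather than a technique prescribed by the text, and it is not a complete QoS policy.

    import time

    class TokenBucket:
        # Allows short bursts up to `capacity`, then at most `rate` requests per second.
        def __init__(self, rate: float, capacity: float):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True    # serve the request
            return False       # drop or delay the request

    bucket = TokenBucket(rate=10, capacity=20)
    print(bucket.allow())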

Viruses and Trojan horses are the favorite methods adopted by attackers to exploit systems available in the network to generate massive amounts of requests that overload other systems. A good defense strategy against DoS attacks must therefore take into account the availability of a large number of easy-to-attack systems outside the control of the organization that runs the servers that could be subject to DoS.

Integrity Violations

The purpose of this kind of attack is to change part of the information stored in a system in a way that is not noticed by legitimate users. Examples include the criminal who attempts to transfer money to his own bank account, but also the cracker who tries to install a Trojan horse in place of a normal software application.

The basic defense tool is offered today by cryptography in the form of digital signatures. We shall come back to the technical aspects of digital signatures. For the moment let us simply remark on the need for appropriate organizational support for the adoption of digital signature techniques. The idea is that there must be some entity or person that takes responsibility for the integrity of data and signs the original data. Users can subsequently check the originality of data by verifying that such data are properly signed by the authority that guarantees their integrity. Counterfeit data could not be properly signed by an attacker that does not have access to the private signature key of the entity responsible for data integrity. It is vital, therefore, for such a defense measure to work properly, that signature keys are kept strictly confidential by the responsible entity, and that the responsible entity always verifies the integrity of the original data before signing them.
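
As a hedged sketch of the sign-then-verify flow (assuming the third-party Python "cryptography" package is available; the data and names are illustrative), the responsible entity signs the original data with its private key, and any user holding the corresponding public key can later check that the data have not been altered.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The responsible entity generates a key pair and keeps the private key secret.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    original = b"official regulation text"
    signature = private_key.sign(original)      # done once by the responsible entity

    # Any user holding the public key can check that the data are original.
    try:
        public_key.verify(signature, original)  # raises InvalidSignature if altered
        print("data are original")
    except InvalidSignature:
        print("data have been tampered with")

If anyone alters the signed data afterwards, verification fails, which is exactly the distinction between original and counterfeit data discussed above.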

Privacy Violations

This type of attack aims at seizing secret information without authorization. The privacy of some data must always be guaranteed in a secure system, even if the purpose of the system is not to keep secret information. Indeed, most of the other protection mechanisms are based on cryptographic protocols, and all cryptographic protocols are based on the ability to keep secrets.

Moreover, even secret data must usually be delivered by the system to some selected legitimate users. Therefore privacy and identification are always intimately related. A system may not be able to guarantee secure identification if it cannot guarantee privacy, and vice versa. Indeed, privacy violations may be the first step in other, more elaborate attack strategies.

Again, cryptography can be considered one of the main useful techniques to guarantee privacy in some circumstances. However, cryptography alone is not sufficient to guarantee privacy in a system. Although strong ciphers exist nowadays that can guarantee the privacy of encrypted data with a very high level of confidence, data must be decrypted at some point. Once data are decrypted (for instance to show them to a legitimate user), only a proper security kernel can guarantee privacy by preventing other users from reading the memory locations in which the data are stored. At some point, we always come down to the capacity of the Operating System kernel to exploit hardware mechanisms to guarantee privacy.
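
A minimal sketch of the encryption side of the argument, assuming the third-party Python "cryptography" package; the data are illustrative. The sketch also makes the limitation above concrete: the key itself, and the decrypted plaintext, still have to be protected by means other than the cipher.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the key itself must be kept secret
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"confidential business data")  # safe to store or transmit
    plaintext = cipher.decrypt(ciphertext)                       # decryption exposes the data again
    print(plaintext)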

Moreover, no software technology may guarantee privacy unless at least part of the hardware that is used can be fully trusted. The adoption of cryptography can only extend the security of a small security kernel beyond the physical limitations of the trusted hardware and operating system kernel. Eventually, secret information must be displayed in human-readable form on a screen, so that only the presence of walls and closed doors may prevent unauthorized users from reading it over the shoulders of legitimate users.

Attacks on Reputation

Operationally, this kind of attack could usually be assimilated to integrity attacks. However, the purpose of the attack is similar to that of denial of service attacks, namely to damage the organization that operates the system. In this case the attacker may try to counterfeit data and/or signatures in order to make third parties believe that the staff responsible for the system performed some action that they did not actually perform.



Security Measures

In order to prevent security attacks, a system designer must take appropriate counter-measures. Of course the first step in identifying and implementing appropriate security measures is a thorough understanding of the strategies and techniques adopted by the attackers.

Strategies of Security Attacks

Most of today's computer systems adopt some more or less elaborate form of protection against security attacks, the minimal one consisting of password-based user identification. Therefore most often an attacker has to find a way of circumventing such measures in order to perpetrate his/her attack. Several strategies have been developed for this purpose, all of them trying to exploit the weakest point in the security system (which in several cases is the non-expert user rather than the security software).

In many cases passwords can be easily guessed or discovered using simple (and non-technological) methods. In other cases a lack of physical protection of the hardware may easily be exploited by attackers as well. Hence it is only when users correctly follow security rules and the critical hardware is physically protected that the job of the attacker starts to become an interesting problem to solve from the computer science point of view.
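
One standard measure on the system side, sketched below as a hedged Python illustration (the parameters and sample passwords are assumptions, not recommendations from the text), is to store only salted, slow hashes of passwords, so that a stolen password file does not directly reveal the passwords and guessing each one becomes expensive.

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes = None):
        # Derive a slow, salted hash from the password (illustrative parameters).
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def check_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored_digest)

    salt, digest = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", salt, digest))  # True
    print(check_password("123456", salt, digest))                        # False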

Technological attacks on security are usually carried out by taking advantage of errors, bugs, or misconceptions in the software tools or their configuration. Databases are maintained that list the "security holes" of various software modules or operating systems, so that an attacker has at his/her disposal a catalogue of attacks to attempt, based on knowledge of the configuration of the system to be attacked.

System managers, of course, also have access to such databases, so they should know about the weak points of their systems and watch them. The most secure solution to the problem would be, of course, the removal of the software components that are subject to known security attacks. However, this is not always possible for organizational reasons. If a software component is critically needed to accomplish the mission of the computer system, the only solution consists in periodically or continuously monitoring the system itself, in the hope of discovering security attacks as soon as possible and countering them when they occur.

Prevention, Reaction, and Reprisal

Depending on the type of attack and the operating conditions of the system, security issues may be handled at different levels. We can identify three different times of reaction to attacks:

Prevention
Measures taken before any attack occurs, with the aim of making attacks impossible or at least much harder to carry out.
Reaction
Measures taken while an attack is in progress or as soon as it is detected, with the aim of stopping it and limiting the damage.
Reprisal
Actions taken after an attack, aimed at identifying the attacker and obtaining redress, for instance by legal means.

Obviously the three strategies are not mutually exclusive. On the contrary, it is good practice never to trust a single security mechanism or strategy.

Physical, Logical, and Organizational measures

Security measures can be classified depending on the components of the system they involve.