5.0 Security in the Internet Environment

Security is usually approached from the point of view of identifying the threats (methods of attack) and the associated risks (potential consequences of a successful attack), and then designing protective measures to address these issues (both thwarting attacks, and limiting the damage resulting from a successful attack).

Typical risks include denial of service, unauthorized access to information, unauthorized execution of an action (e.g. transaction), modification of legitimate transactions and data, etc.

While most people tend to think of threats as originating from outside the organization or legitimate community of users, most security breaches (by some estimates roughly 70%) actually originate from within. However, for the types of information and users associated with TSINs, the main threats are likely to be external, and those are the threats that we focus on here.

Security issues arise, and are dealt with, at several places in the network and attached systems. Very generally speaking, "security" addresses different threats at different "levels" of the overall architecture: the network (and all its many components), the end systems (server and client machines), and the applications themselves. Traditional security objectives include confidentiality, integrity, and availability, as well as legitimate use (of resources). Each of these has associated threats, though the threats differ somewhat depending on the part of the architecture under consideration. Without undertaking an exhaustive catalogue (see [Ford-95] and [Stallings-95a]), network-level threats - that is, threats that might originate within the network - include, for example, denial-of-service, eavesdropping, integrity violation (modification of data), masquerade, replay, and service spoofing. System-level threats include several of the network-level threats plus things like authorization violation, illegitimate use, indiscretions by personnel, theft, trapdoors and Trojan horses (deliberate security flaws in software), etc. Additional application-level threats include service spoofing (pretending to be a legitimate service when you are not), repudiation (of a previous legitimate transaction), etc. We will not address all of these here, but attempt to give a flavor of the kinds of countermeasures that are in use and being developed.

Security in the Internet itself involves the reliable operation of the routers and services essential to the operation of the transport level of the network. These elements are, relatively speaking, few in number and operated by network professionals whose full-time job is to ensure their correct operation. While no human organization or software system is free of bugs that can produce security weaknesses, the single-purpose software of routers and the constant attention of people who are aware of the issues, consequences, and solutions make the Internet infrastructure an infrequently successful target of security-oriented attacks. Successful attacks at this level would probably manifest themselves primarily as service interruptions, which are rare from any cause, although "wiretapping" and spoofing* are also potential consequences of successful attacks on the Internet infrastructure.

With respect to specific applications (like the TSIN database servers), most Internet-based security attacks are likely to be indirect. That is, the object of attack is the computer system that runs the application and stores the data, rather than the application itself. The most common mode of this type of attack is through heavily used services (servers), such as mail and ftp, that provide external access to the system.

There are many reasons why indirect attacks are the most common, but it usually comes down to the fact that commonly configured computing systems offer many possible targets, many of which can be replicated and studied by would-be assailants. The targets are typically the servers that provide the common system services of computers attached to the Internet: mail, ftp, remote login, etc. Apart from identifying and correcting all of the design and implementation problems of all of these complex programs (a large and on-going effort by many people in the commercial and academic computer science communities), the best way to reduce vulnerability is to limit the targets or limit the access to them.

The best way to limit targets is to simplify the system. If being a TSIN data server is the only function of a system, then many of the "standard" (and well studied) services do not need to be run, and therefore will not present targets for attack.

The second common way to reduce vulnerability is to limit access by restricting which servers can be accessed from the Internet, or by permitting only known external systems to access the services. These safeguards are accomplished by "firewall" methods that operate most commonly on a router that sits between the end-user system (e.g., the TSIN data server) and the rest of the Internet, but firewall techniques can also be usefully applied on end-user systems themselves. Firewalls are "filters" that screen incoming traffic based on type (what service is being addressed on the inside), originator (large classes of possible originating hosts may be disallowed), receiver, etc. They provide a very effective set of techniques, but do not protect, for example, against all types of address spoofing (a mode of attack where the assailant masquerades as a "friendly" system).
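
To make the filtering idea concrete, the following minimal sketch (in Python, with addresses, rules, and function names invented purely for illustration) captures the "default deny" logic that a packet-filtering firewall applies to each incoming connection:

    # A toy packet filter: everything is dropped unless a rule allows it.
    # Real firewalls run on routers or in the operating system kernel;
    # the networks and ports below are hypothetical.
    ALLOWED_RULES = [
        ("192.0.2.", 80),   # a "friendly" network, WWW service
        ("192.0.2.", 25),   # the same network, mail service
    ]

    def admit(source_addr, dest_port):
        """Return True if an incoming packet matches an allow rule."""
        for prefix, port in ALLOWED_RULES:
            if source_addr.startswith(prefix) and dest_port == port:
                return True
        return False  # default deny

    print(admit("192.0.2.17", 80))   # True  - known originator and service
    print(admit("203.0.113.9", 80))  # False - unknown originator

Note that a filter of this kind trusts the source address it is given, which is exactly why address spoofing remains a threat.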

Many security breaches at the system level (both from within and from outside) are due to poor system administration practices. This issue is addressed by an increasingly large body of well publicized knowledge on safe and standard practice for running a system so that it does not inadvertently expose itself to attack, together with operating policy to ensure that safe practice is followed. This kind of information is available from books, articles, professional training, and Internet standards community documents. For example, see [CERT], [Curry-92], [SSH], [Garfinkel-91], [Harvard], [RFC-1244], and [Spafford].

Finally we come to the direct attack threats that most people associate with security issues: the compromise of the data stream and/or the application itself.

Techniques to address threats at this level have developed rapidly over the past few years due to the increasing interest in providing financial services over the Internet. The types of services and techniques that are now available for "end-to-end" security address the issues of authenticated access, data integrity, confidentiality, and (potentially) non-repudiation of transactions. The advantage of end-to-end security techniques is that, for the aforementioned objectives, one does not have to depend on (trust) security at lower levels. (Which does not mean that lower-level security is not needed: if your network does not function or your data is destroyed on disk, end-to-end security is a moot point.) Most of the end-to-end techniques being adopted by the Internet financial community are based on a technology known as "public key cryptography" and the associated public key certificate infrastructure.

5.1 How do Public Key Certificates Address Internet Security?

Public Key Certificates are a combination of two technologies - public key cryptography and network directory services - that are now being used as the basis of Internet financial transaction security by a wide variety of commercial concerns (e.g. most commercial WWW browsers, such as Netscape and Spyglass, as well as Mastercard, Visa, Microsoft, etc. - see [SSLNews]).

Public key cryptography uses what is known as asymmetric key-based encryption: two associated encryption keys are generated as a pair, and documents encrypted with one key are decrypted with the other key. "Conventional" (or "secret key") cryptography uses the same key to encrypt and decrypt, and is therefore called symmetric encryption. The DES standard for "bulk" encryption is a symmetric scheme. See [RSA] for an introduction to public key cryptography and the excellent book by Warwick Ford [Ford-95] for in-depth coverage of the whole field.
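
As a concrete illustration of the symmetric case, the short Python sketch below uses the modern "cryptography" package; its Fernet (AES-based) construction stands in here for DES, since the structure - one shared secret key doing both jobs - is the same:

    # Symmetric ("secret key") encryption: the same key encrypts and decrypts.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                # the single shared secret key
    token = Fernet(key).encrypt(b"bulk data")  # encrypt with the key ...
    assert Fernet(key).decrypt(token) == b"bulk data"  # ... and decrypt with it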

The asymmetric nature of public key cryptography (PKC) leads to some interesting properties that are the reason for its value and success in securing wide area network-based transactions. The most common way to use PKC is for the originator of a document to encrypt that document with one of the two keys (usually called the "private" key). The encrypted document and the other key (called the "public" key) are then both distributed widely. The public key is then used to decrypt the document. Successful decryption with the public key means that the document could only have been encrypted with the corresponding private key (and therefore by the holder of that key), and that it must be exactly the same as the document that was originally encrypted.
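
The sketch below shows this sign-and-verify flow using the same Python "cryptography" package (our choice of tool, not part of the systems described here). One practical detail: modern libraries implement "encrypting with the private key" as a signature computed over a cryptographic hash of the document, but the effect is exactly as described above.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    # Generate an asymmetric key pair; the private half is kept secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"Transfer 100 units to account 42."

    # "Sign" the document with the private key ...
    signature = private_key.sign(
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # ... and anyone holding the public key can check both authenticity and
    # integrity: verification fails if the document was altered in any way.
    try:
        public_key.verify(
            signature, document,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("document is authentic and intact")
    except InvalidSignature:
        print("document was forged or modified")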

This technique, then, accomplishes two important goals: It verifies the integrity of the document in-hand (it must be an exact copy of the original) and it verifies the authenticity (only the holder of the private key corresponding to the public key that decrypted the document could have "signed" the original).

In a useful variation of this scheme, documents encrypted using a public key can only be decrypted using the corresponding private key, thus providing a means for anyone in possession of the public key to correspond in guaranteed privacy with the holder of the private key. The principle (described above) and the practice, however, are somewhat different. Public key cryptography is computationally expensive, and so is usually used only for "small" messages. Typical examples of such messages include "digital signatures" (small codes that, when used with the public key certificates described below, verify the integrity of a message or document but do not disguise its contents), or the secret key for the bulk en/decryption of a document.
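
A minimal sketch of this confidentiality variant follows, again with the Python "cryptography" package (RSA with OAEP padding; the message text is invented). Note that the message must be small - in practice it is often the secret session key discussed at the end of this section:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # A "small" message - in practice, often a secret session key.
    secret = b"temporary session key material"

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone holding the public key can encrypt ...
    ciphertext = public_key.encrypt(secret, oaep)

    # ... but only the holder of the private key can recover the message.
    recovered = private_key.decrypt(ciphertext, oaep)
    assert recovered == secret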

Public key technology is useful in and of itself, but a number of other concepts and supporting infrastructure are needed to make it the useful and powerful tool that it is today, and this brings us to certificates.

The pivotal issue is how public keys are generated, irrefutably associated with a known entity (person, corporation, etc.), and then distributed so that the originator of a document can be both identified and verified.

One important concept is that of Certification Authorities, and how they work. (See [Kent] and [Ford-95].) The problem to be solved is how you associate a public key with an entity (e.g. person), and then distribute that public key so that the relationship (of entity to key) cannot be counterfeited. This is accomplished in the following way. A Certification Authority ("CA") is a trusted third party that independently verifies the identity of an individual (or other "entity"), issues a public/private key pair, and then makes publicly available the identity and the corresponding public key of the registrant. The individual, of course, must keep the private key completely confidential.*

The identity and the public key are placed in a document that the CA then "signs" with its private key*. This document - the "certificate" - is then made publicly available, typically through some sort of network accessible database.

The recipient of a "signed" document retrieves the public key certificate of the presumed author, verifies (through the use of decryption) this certificate with the public key of the CA, extracts the author identity and public key, and then finally verifies the original document. Successful decryption of the original document (or its seal) now verifies its authenticity and the identity of the author. This, then, is the basis of secure, authenticated Internet transactions in an open, scalable, and public network.
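
Putting the pieces together, the following toy Python sketch walks through the whole flow - the CA signs a certificate, the author signs a document, and a recipient verifies both. The two-field "certificate" format and the names are our simplification for illustration; real systems use the X.509 format described below:

    # Toy certificate flow (illustrative only; not X.509).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat, load_pem_public_key)

    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # The CA and the author each hold a key pair.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    author_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The CA binds the author's identity to the author's public key by
    # signing the pair; the signed blob is the "certificate".
    author_pub_pem = author_key.public_key().public_bytes(
        Encoding.PEM, PublicFormat.SubjectPublicKeyInfo)
    cert_body = b"identity=Jane Q. Author\n" + author_pub_pem
    cert_sig = ca_key.sign(cert_body, PSS, hashes.SHA256())

    # The author signs a document with her private key.
    document = b"An authenticated TSIN transaction."
    doc_sig = author_key.sign(document, PSS, hashes.SHA256())

    # A recipient who trusts only the CA's public key first verifies the
    # certificate, then uses the public key inside it to verify the document.
    ca_key.public_key().verify(cert_sig, cert_body, PSS, hashes.SHA256())
    author_pub = load_pem_public_key(author_pub_pem)
    author_pub.verify(doc_sig, document, PSS, hashes.SHA256())
    print("certificate and document both verify")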

There are two other important details to address in order to complete the conceptual framework of public key certificates.

Public key certificates are distributed from network accessible databases. While many variations are possible, the current trend is to use the X.500 network directory server. The primary use of directory servers is to permit e-mail addresses, postal addresses, telephone numbers, etc. to be found for individuals. However, X.500 servers can also supply public key certificates (in a standard form, called an X.509 certificate). These servers are available from anywhere on the Internet and are typically maintained by the parent organization (e.g. a company, university, government laboratory, etc.). This infrastructure, then, provides for the wide distribution of identity-public key information.
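
As a small, concrete illustration, a retrieved X.509 certificate can be parsed and inspected with the Python "cryptography" package (the file name here is hypothetical; in the scheme just described, the certificate would be fetched from a directory server):

    # Inspect an X.509 certificate: the identity, the issuing CA, and the
    # public key bound to that identity.
    from cryptography import x509

    with open("author_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject)              # the identity the CA vouches for
    print(cert.issuer)               # the CA that signed the certificate
    public_key = cert.public_key()   # the key bound to that identity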

The second remaining detail is who issues the certificate and verifies identity. Currently this is done by certification authorities typically associated with a particular community of users. So, for example, Apple Computer has a distributed collaboration environment in which "strong" user authentication is required. To support this activity Apple runs a certification authority. In order to register with this CA (and be issued a public/private key pair) an individual must present documents such as a birth certificate and driver's license to a Notary Public, who signs the registration form. On this basis, the CA provides a certain level of "guarantee" for the identity of the individual associated with the public key. (See [Baum-94] for a detailed discussion of certification authorities and their legal and sociological implications.) RSA, Inc. - the provider of most commercial implementations of public key technology - also runs a CA. Large-scale CAs in the future will probably be run by the U.S. Postal Service. The USPS CA will provide several levels of certification (identity guarantees) ranging from anonymous to biometrically verified (e.g. retina scan). USPS sees this, and a variety of other cryptographically-based services, as one of its major commercial services in the future. (See [USPS].)

Given the techniques of public key certificates and the infrastructure of network directory servers, security services and protocols can be developed to address the application-level security issues. The services define how an application achieves security, and the protocols specify how to use the basic information and verification techniques (i.e. what information is presented, when, in what order, etc.) in order to accomplish "strong" user (and server) authentication, confidential and verifiable transactions, etc. Description of the services and protocols is beyond the scope of this overview, but examples include the Generic Security Service Application Program Interface (with the "Simple Public-Key Mechanism" - SPKM) ([GSSAPI]), the Secure Socket Layer protocol (see [SSLProtocol]), Privacy Enhanced Mail ([Kent]), MIME Object Security Services ([MOSS]), and the Secure HyperText Transfer Protocol (and associated services) ([SHTTP]).

Two more comments are needed in the interest of completeness. The first is that public key cryptography is almost always used in conjunction with symmetric cryptography (e.g., DES). For the exchange of confidential information, PKC is typically used first for authentication and then to provide the secure messaging over which temporary secret keys for DES encryption/decryption are exchanged. These temporary keys are sometimes called "session keys" and are used by the two parties for a limited period of time to encrypt and decrypt documents and data. The reason for this division of labor is that DES is much more computationally efficient for "bulk" encryption of data streams.
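
The sketch below shows this hybrid pattern in Python (again with the "cryptography" package; its Fernet/AES construction stands in for DES, which is obsolete today, but the structure - wrap a small session key with PKC, bulk-encrypt with the symmetric key - is the one described):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    recipient_key = rsa.generate_private_key(public_exponent=65537,
                                             key_size=2048)

    # Sender: generate a one-time session key and bulk-encrypt the data.
    session_key = Fernet.generate_key()
    bulk_ciphertext = Fernet(session_key).encrypt(b"a large document " * 1000)

    # Sender: wrap the small session key with the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

    # Recipient: unwrap the session key, then decrypt the bulk data.
    unwrapped_key = recipient_key.decrypt(wrapped_key, oaep)
    plaintext = Fernet(unwrapped_key).decrypt(bulk_ciphertext)
    assert plaintext == b"a large document " * 1000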

An important final note is the following. As we have seen, security is a many-faceted thing, involving policy, operational practice, network servers, client programs, etc. At the most elementary level there are two types of computer code whose correctness is essential for the underlying concepts to provide security. The first is the cryptographic algorithms themselves (e.g. DES) and the second is the protocols that use these basic algorithms to accomplish conditions like user authentication. Experience has shown that good implementations are those that have stood the test of time (and attacks), and have had their weaknesses identified and corrected. Weaknesses can occur in the protocols (somewhere in a sequence of operations, a critical piece of information is inadvertently exposed) or in the computer code that implements the algorithms and protocols. A good example of this process was reported in an article in the New York Times [NYT-9-19-95]. The Secure Sockets Layer [SSL] used by Netscape for their WWW transaction security is a protocol. Part of the implementation of this protocol involves generating encryption keys on the fly (session keys). Generating such keys requires a "seed" number. If the seed is not sufficiently random, it may be possible to break the encryption. The Netscape implementation made the mistake of not using a sufficiently random seed, thus making it possible to break the encryption based on the keys generated from those seeds. Two computer science graduate students discovered the weakness, broke the encryption, and announced the fact on an Internet bulletin board. Netscape has since changed the way it generates seeds and carefully re-examined the rest of its code. In other words, their implementation has moved toward the maturity that makes for really strong security.
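
The Netscape lesson can be stated in a few lines of modern Python (the modules shown postdate the events described, and the seed value is invented, but the contrast is the same one at issue):

    import random, secrets

    # WRONG: "random" is a predictable pseudo-random generator; an attacker
    # who can guess the seed (e.g. from the time of day, as in the Netscape
    # case) can regenerate the session key.
    random.seed(1995)                  # guessable seed
    weak_key = random.randbytes(16)

    # RIGHT: "secrets" draws from the operating system's cryptographically
    # strong random source.
    strong_key = secrets.token_bytes(16)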
