Information security software. Information security tools. Let us recall that identification is usually understood as assigning unique identifiers to access subjects and comparing a presented identifier with the list of registered ones. In turn, authentication is understood as confirming that the presented identifier actually belongs to the subject presenting it.

7.1 Protection of information in electronic payment systems

An electronic payment system (EPS) is a system for conducting payments between financial and business organizations (on the one hand), and Internet users (on the other hand) in the process of buying and selling goods and services via the Internet. It is EPS that allows you to turn an order processing service or an electronic storefront into a full-fledged store with all standard attributes: by selecting a product or service on the seller’s website, the buyer can make a payment without leaving the computer. EPS is an electronic version of traditional payment systems.

In the e-commerce system, payments are made subject to the following conditions:

A. Maintaining confidentiality. When making payments via the Internet, the buyer wants his data (for example, credit card number) to be known only to organizations that have the legal right to do so.

B. Maintaining the integrity of information. Purchase information cannot be changed by anyone.

C. Authentication. See subsection 7.2.

D. Means of payment. Possibility of payment using any means of payment available to the buyer.

E. Seller risk guarantees. When trading online, the seller is exposed to many risks associated with refusal of the product and buyer dishonesty. The magnitude of these risks must be agreed, through special agreements, with the payment system provider and the other organizations included in the trade chain.

F. Minimizing transaction fees. The fee for processing order and payment transactions is, of course, included in the price of the goods, so reducing the transaction fee lowers the cost of the product. It is important to note that the transaction fee must be paid in any case, even if the buyer refuses the order.

7.2 Identification, authentication and authorization

7.2.1 Identification (from Late Latin identifico, "I identify") is the recognition of identity, the identification and recognition of objects. Identification is widely used in mathematics, technology and other fields (law, etc.): for example, algorithmic languages use identifier symbols for transactions, and cash registers identify coins by their mass and shape. The main tasks of identification include pattern recognition, the formation of analogies and generalizations and their classification, the analysis of sign systems, etc. Identification establishes the correspondence of a recognized object to its image - an object called an identifier. Identifiers are usually attributes of mutually corresponding objects; identical objects are considered equivalent, that is, having the same meaning and significance.

7.2.2 Authentication is a procedure for establishing that the parameters characterizing a user, process or data meet specified criteria. Authentication is used, as a rule, to verify a user's right of access to certain resources, programs and data. The matching criterion is usually the coincidence of information entered into the system in advance with information received during the authentication process - for example, the user's password, fingerprint or retinal structure. In electronic payment systems, authentication is a procedure that allows the seller and the buyer to be confident that all parties involved in a transaction are who they say they are.
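As a minimal illustration of how a matching criterion works, the sketch below (Python; the user name and passwords are invented for the example) stores only a salted hash of the password entered in advance and compares it with the hash of the password presented at login.

```python
import hashlib
import hmac
import os

# Registration: only a salt and a salted hash are stored, never the password itself.
def register(store: dict, user: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    store[user] = (salt, digest)

# Authentication: recompute the hash and compare it in constant time.
def authenticate(store: dict, user: str, password: str) -> bool:
    if user not in store:
        return False
    salt, stored = store[user]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

users = {}
register(users, "alice", "s3cret")             # information entered in advance
print(authenticate(users, "alice", "s3cret"))  # True: the criteria match
print(authenticate(users, "alice", "guess"))   # False: access is denied
```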

7.2.3 Authorization is a procedure during which you enter a name - for example, the nickname chosen when registering on the Odnoklassniki website - and the password specified during registration. After authorization, the site's service "recognizes" you under this name and gives you access to the pages and functions available to that name. Authorization in a local network performs the same functions.

7.3 Securing ATMs

What an ATM is and what its functions are is well known. ATM security tools provide multi-level protection of operations - organizational, mechanical, optical, electronic, software - up to the installation of an alarm system with a video camera (optical protection). It is possible to install a video camera with a video recorder that records all user actions with the ATM.

Software protection of the ATM is provided by the card's PIN code and the software that recognizes it. Organizational protection consists of placing the part of the ATM where the cassettes with banknotes are stored in a prominent, clearly visible place in the banking hall, or in an isolated room. Cassettes are refilled by cash collectors either at the end of the working day, when there are no clients, or after clients have been removed from the hall. To protect against vandalism, special booths are used, for example from DIEBOLD. A booth in which one or more ATMs are installed is locked with electronic locks; the locks allow only cardholders to enter the booth and are protected by an alarm system.

Mechanical protection is provided by storing the banknote cassettes in safes of various designs (UL 291, RAL-RG 626/3, C1/C2), which differ in size, wall thickness and weight. The safes are locked with various locks: key locks, locks with a single or double digital code, or electronic locks (electronic security).

To prevent ATM hacking, sensors for various purposes with an alarm system are used. Thermal sensors, for example, detect attempts to plasma cut metal. Seismic sensors detect attempts to remove the ATM (electronic security).

7.4 Security of electronic payments via the Internet

EPS are divided into debit and credit systems. Debit EPS work with electronic checks and digital cash. Checks (electronic money, for example, money in bank accounts) are issued by the issuer that manages the EPS. Using the issued checks, users make and accept payments online. A check (an analogue of a paper check) is an electronic order from a client to his bank to transfer money. At the bottom of the electronic check is an electronic digital signature (EDS). The protection of information in debit EPS is carried out precisely with the help of the electronic digital signature, which uses a public key encryption system.
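As an illustration of how a digital signature protects an electronic check, the sketch below uses the third-party Python package cryptography and the RSA-PSS scheme; the check text and key handling are purely illustrative and do not reproduce the format of any actual EPS.

```python
# pip install cryptography  (third-party package, assumed here)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The client (check issuer) holds the private key; the bank verifies with the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

check = b"Pay 100.00 to shop-42 from account 1234, 2024-05-01"  # hypothetical check text

# Signing plays the role of the EDS placed at the bottom of the electronic check.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(check, pss, hashes.SHA256())

# The bank verifies the signature; any change to the check makes verification fail.
public_key.verify(signature, check, pss, hashes.SHA256())
print("check signature is valid")
```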

Credit EPS use credit cards, and working with them is similar to working with cards in other systems. The difference is that all transactions in a credit EPS are carried out via the Internet, so in a credit EPS there is also the possibility of an attacker intercepting card details online. Information in credit EPS is protected by secure transaction protocols (for example, the SSL (Secure Sockets Layer) protocol), as well as by the SET (Secure Electronic Transaction) standard, designed eventually to replace SSL in processing transactions related to payment for credit card purchases over the Internet.

7.5 Software for protecting information stored on personal computers

Most computers nowadays are connected to the Internet. In addition to useful information, harmful information can also reach your computer from the Internet. And while spam merely clogs the computer, the Internet can also be a source of viruses, hacker attacks and other malware. To protect information stored on personal computers from malware, various antivirus programs (AV), firewalls, anti-hackers and anti-Trojans are used. The main ones are AV programs and firewalls. AV programs are discussed in detail in subsection 7.8.

A firewall (also called a network screen) is a program that filters network packets at various levels in accordance with specified rules. The main task of a firewall is to protect computer networks or individual nodes from unauthorized access. The firewall does not pass packets that do not match the criteria defined in its configuration, i.e. it stops malware from entering the computer. Minsk hosts a branch of approximately 70 programmers (20% of the total staff) of the well-known firewall developer Check Point.

7.6 Methods for organizing access control

The main functions of an access control system (ACS) are:

Implementation of access restriction rules (ARR) governing the access of subjects and their processes to data;

Implementation of the ARR governing the access of subjects and their processes to devices for creating hard copies;

Isolation of the programs of a process executed on behalf of a subject from other subjects;

Managing data flows to prevent data from being written to inappropriate media;

Implementation of rules for data exchange between subjects for automated systems (AS) and computer facilities built on network principles.

The functioning of the ACS is based on the chosen method of access control. The most direct way to ensure data security is to provide each user with a computing system of their own. In a multi-user system, similar results can be achieved using a virtual computer model.

In this case, each user has his own copy of the operating system. A virtual machine monitor creates, for each copy of the operating system, the illusion that there are no other copies and that the objects the user has access to are his alone. However, separating users in this way does not make efficient use of the resources of the automated system (AS).

In ASs that allow sharing of access objects, there is a problem of distribution of powers of subjects in relation to objects. The most complete model for the distribution of powers is the access matrix. The access matrix is ​​an abstract model for describing an authorization system.

The rows of the matrix correspond to subjects, and the columns to objects; matrix elements characterize access rights (read, add information, change information, execute a program, etc.). To change access rights, the model can, for example, contain special ownership and control rights. If a subject owns an object, it has the right to change the access rights of other subjects to that object. If a subject controls another subject, it can remove that subject's access rights or transfer its own access rights to that subject. In order to implement the control function, subjects in the access matrix must also be defined as objects.

Elements of the authority establishment matrix (access matrix) may contain pointers to special procedures that must be executed each time a given subject attempts to access an object and make a decision about the possibility of access. The following rules can serve as the basis for such procedures:

The access decision is based on the access history of other objects;

The access decision is based on the dynamics of the system state (the access rights of a subject depend on the current rights of other subjects);

The access decision is based on the value of certain internal system variables, such as time values, etc.

In the most important systems, it is advisable to use procedures in which the decision is made based on the values ​​of intra-system variables (access time, terminal numbers, etc.), since these procedures narrow access rights.

Access matrices are usually implemented in two main ways: as access lists or as capability (mandate) lists. An access list is assigned to each object and is identical to the column of the access matrix corresponding to that object. Access lists are often stored in file directories. A capability list is assigned to each subject and is equivalent to the row of the access matrix corresponding to that subject. When a subject has access rights to an object, the pair (object, access rights) is called a capability (mandate) for that object.

In practice, access lists are used when creating new objects, determining the order of their use, or changing access rights to objects. Capability lists, on the other hand, combine all the access rights of a subject. When a program is executed, for example, the operating system must be able to determine the program's authority efficiently; in this case, capability lists are more convenient for implementing the authorization mechanism.
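The toy sketch below (Python; the subjects, objects and rights are invented) shows the same access matrix represented as per-object access lists (columns) and per-subject capability lists (rows).

```python
# A small access matrix: (subject, object) -> set of rights.
ACCESS_MATRIX = {
    ("alice", "report.txt"): {"read", "write"},
    ("alice", "payroll.db"): {"read"},
    ("bob",   "report.txt"): {"read"},
}

def access_list(obj: str) -> dict:
    """Column of the matrix: which subjects may do what with this object."""
    return {s: rights for (s, o), rights in ACCESS_MATRIX.items() if o == obj}

def capability_list(subject: str) -> dict:
    """Row of the matrix: the subject's capabilities (mandates) for all objects."""
    return {o: rights for (s, o), rights in ACCESS_MATRIX.items() if s == subject}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    return right in ACCESS_MATRIX.get((subject, obj), set())

print(access_list("report.txt"))                 # alice: read/write, bob: read
print(capability_list("alice"))                  # entries for report.txt and payroll.db
print(is_allowed("bob", "report.txt", "write"))  # False
```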

Some operating systems support both access lists and capability lists. At the start, when a user logs on to a network or starts executing a program, only access lists are used. When a subject attempts to access an object for the first time, the access list is parsed and the subject's rights to access the object are checked. If the rights exist, they are added to the subject's capability list, and further access rights are verified by checking this list.

When both types of lists are used, the access list is often located in the file directory, and the capability list is kept in random access memory while the subject is active. To increase efficiency, hardware support in the form of a capability register can be used.

The third method of implementing an access matrix is the so-called lock-and-key mechanism. Each object is assigned one or more pairs (A, K), where A is a certain type of access and K is a fairly long sequence of characters called a lock. Each subject is assigned sequences of characters called keys. If a subject wants to gain access of type A to a certain object, it is checked whether the subject owns the key to a pair (A, K) assigned to that object.
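A minimal sketch of the lock-and-key idea, with invented names: objects carry (access type, lock) pairs, subjects carry keys, and access is granted only when one of the subject's keys matches the lock for the requested access type.

```python
import secrets

read_lock = secrets.token_hex(16)           # a long character sequence used as the lock

object_locks = {"report.txt": [("read", read_lock)]}
subject_keys = {"alice": {read_lock}, "bob": set()}

def allowed(subject: str, obj: str, access_type: str) -> bool:
    return any(
        a == access_type and lock in subject_keys.get(subject, set())
        for a, lock in object_locks.get(obj, [])
    )

print(allowed("alice", "report.txt", "read"))   # True: alice holds the matching key
print(allowed("bob", "report.txt", "read"))     # False: bob has no key for this lock
```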

The disadvantages of using access matrices with all access subjects and objects include the large dimension of the matrices. To reduce the dimension of authority matrices, various compression methods are used:

Establishment of user groups, each of which represents a group of users with identical powers;

Distribution of terminals by authority classes;

Grouping of protected data elements into a number of categories in terms of information security (for example, by confidentiality levels).

Based on the nature of access control, access control systems are divided into discretionary and mandatory.

Discretionary access control makes it possible to control the access of named subjects (users) to named objects (files, programs, etc.). For example, object owners are given the right to restrict other users' access to that object. With such access control, each pair (subject-object) must be given an explicit and unambiguous listing of the allowed access types (read, write, etc.), i.e. those types of access that are authorized for a given subject to a given object. However, there are other access control problems that cannot be solved by discretionary control alone. One of these tasks is to allow the AS administrator to control the creation of access control lists by object owners.

Mandatory access control allows you to divide information into certain classes and control the flow of information when crossing the boundaries of these classes.

Many systems implement both mandatory and discretionary access control. In this case, discretionary access control rules supplement the mandatory ones. A decision to authorize an access request should be made only if the request is simultaneously authorized by both the discretionary and the mandatory access rules. Thus, not only individual acts of access but also information flows must be controlled.
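The sketch below (Python; the labels, clearances and rules are hypothetical) shows the combination rule: a request is granted only if the discretionary check and the mandatory check both allow it.

```python
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

ACL = {("report.txt", "alice"): {"read", "write"}}     # discretionary part
subject_clearance = {"alice": "confidential"}          # mandatory part
object_label = {"report.txt": "secret"}

def dac_allows(subject: str, obj: str, right: str) -> bool:
    return right in ACL.get((obj, subject), set())

def mac_allows(subject: str, obj: str, right: str) -> bool:
    # simple "no read up" rule: reading requires clearance >= object label
    if right == "read":
        return LEVELS[subject_clearance[subject]] >= LEVELS[object_label[obj]]
    return True

def allowed(subject: str, obj: str, right: str) -> bool:
    return dac_allows(subject, obj, right) and mac_allows(subject, obj, right)

print(allowed("alice", "report.txt", "read"))   # False: DAC allows, but MAC forbids
```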

The supporting tools of the access control system perform the following functions:

Identification and recognition (authentication) of subjects and maintaining the binding of the subject to the process performed for the subject;

Registration of the actions of the subject and its process;

Providing opportunities to exclude and include new subjects and access objects, as well as changing the powers of subjects;

Reaction to unauthorized access attempts, for example, alarms, blocking, restoration of the protection system after unauthorized access;

Testing of all information security functions using special software;

Clearing RAM and working areas on magnetic media after the user has finished working with protected data, by overwriting them twice with arbitrary data;

Accounting for output printed and graphic forms and hard copies in the AS;

Monitoring the integrity of the software and information parts of both the data storage system and the means that support it.

For each event the following information must be recorded: the date and time; the subject performing the registered action; the event type (if an access request is logged, the object and the access type should be noted); and whether the event was successful (whether the access request was served or not).
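A minimal sketch of such an event record (Python; the field names are illustrative and not taken from any particular system) is shown below.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    subject: str                        # who performed the registered action
    event_type: str                     # e.g. "access_request"
    obj: Optional[str] = None           # object, when an access request is logged
    access_type: Optional[str] = None   # e.g. "read", "write"
    success: bool = False               # whether the request was served
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log = []
log.append(AuditRecord("alice", "access_request", "report.txt", "read", success=True))
print(log[0])
```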

The issuance of printed documents must be accompanied by the automatic marking of each sheet (page) of the document with a serial number and the accounting details of the AS, with the total number of sheets (pages) indicated on the last sheet. Along with the issuance of a document, a document registration card can be issued automatically, indicating the date of issue, the accounting details of the document, a summary (name, type, code), the confidentiality level of the document, the name of the person who issued it, and the number of pages and copies of the document.

Created protected files, directories, volumes, areas of RAM of a personal computer allocated for processing protected files, external devices and communication channels are subject to automatic accounting.

Protected media must be accounted for using logs or card files, with the issuance of media recorded. In addition, several duplicating types of accounting may be maintained.

The response to an unauthorized access attempt can take several forms:

Excluding the offending subject from work in the AS at the first attempt to violate the access rules, or after a certain number of permitted errors has been exceeded;

Suspending the offending subject's work, notifying the AS administrator of the unauthorized action and activating a special program for dealing with the intruder, which simulates the operation of the AS and allows the network administrators to localize the source of the attempt.

An access control system can be implemented using software methods, hardware methods, or a combination of the two. Recently, hardware methods of protecting information from unauthorized access have been developing intensively because, firstly, the element base is developing rapidly, secondly, the cost of hardware is constantly decreasing and, finally, thirdly, a hardware implementation of protection performs better than a software one.

7.7 Information integrity control

The integrity of information is the absence of signs of its destruction or distortion. Integrity means that the data is complete and has not been changed during any operation on it, be it transmission, storage or presentation. The task of integrity monitoring must be approached from two positions. First, it is necessary to answer the question of why integrity control is implemented at all. If the policy delimiting access to resources is implemented correctly, their integrity cannot be violated without authorization. It follows that the integrity of resources should be monitored when correct access control cannot be implemented (for example, when an application is launched from an external drive, since for external drives a closed software environment can no longer be enforced), or on the assumption that the access policy can be overcome by an attacker. This is a completely reasonable assumption, since it is impossible, even in theory, to build a system of protection against unauthorized access that provides 100% protection. Second, it is necessary to understand that integrity control is a very resource-intensive mechanism; therefore, in practice, control (especially with high frequency, without which such control makes little sense) is permissible only for a very limited set of objects.
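A minimal sketch of such control for a deliberately limited set of objects is shown below (Python; the file names are hypothetical): reference SHA-256 hashes are stored once in a known-good state and re-checked later.

```python
import hashlib
import json
from pathlib import Path

WATCHED = [Path("config.ini"), Path("app.exe")]   # a deliberately small set of objects

def snapshot() -> dict:
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in WATCHED}

def save_reference(path: str = "integrity.json") -> None:
    Path(path).write_text(json.dumps(snapshot(), indent=2))

def check(path: str = "integrity.json") -> list:
    reference = json.loads(Path(path).read_text())
    current = snapshot()
    return [name for name, digest in reference.items() if current.get(name) != digest]

# save_reference()   # run once in a known-good state
# print(check())     # later: the list of files whose contents have changed
```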

The fundamental feature of information protection at the application level is that implementing any restrictive policy of access to resources (the main task of protection against unauthorized access) is not feasible at this level (it could potentially be overcome easily by an attacker). At this level, only control tasks based on comparison with a reference can be solved, and it is assumed a priori that only events that have already occurred can be compared with the reference. That is, the task of protection at the application level is not to prevent an unauthorized event, but to identify and record the fact that an unauthorized event has occurred.

Let's look at the advantages and disadvantages of protection at the application level, compared to protection at the system level. The main disadvantage is that at the application level it is generally impossible to prevent an unauthorized event, because the very fact that the event occurred is controlled, so it is only possible to respond to such an event (as quickly as possible) in order to minimize its consequences.

The main advantage is that the fact that an unauthorized event has occurred can almost always be registered, regardless of the reasons for its occurrence, since it is the fact of the event itself that is recorded. Let us illustrate this with a simple example. One of the main protection mechanisms in a system of protection against unauthorized access is the mechanism that ensures a closed software environment (its essence is to prevent the launch of any third-party processes and applications, regardless of how they got onto the computer). This problem must be solved at the system level: a security driver intercepts every request to launch an executable file and analyzes it, ensuring that only allowed processes and applications can start. When a similar problem is solved at the application level, the protection tool analyzes which processes and applications are running, and if it detects that an unauthorized process (application) has been started, it terminates it (the reaction of the protection system to an unauthorized event).

As we can see, the advantage of the system-level implementation is that it should, in principle, prevent the launch of unauthorized processes (applications), whereas the application-level implementation records the event only after it has occurred, i.e. after the process has been launched. As a result, before the protection tool terminates it (if such a reaction is configured), the process may manage to perform some unauthorized action, or at least part of it, which is why a prompt response to a detected event is the most important condition here. On the other hand, who can guarantee that the system driver solves this protection task correctly and in full, given the potential danger associated with errors and backdoors ("bookmarks") in system and application software? In other words, one can never guarantee that a system driver cannot be bypassed by an attacker under certain conditions. In that case the administrator will not even know that unauthorized access has taken place. When the problem is solved at the application level, the reason that led to the unauthorized event no longer matters, since the very fact of its occurrence is recorded (even if it was caused by exploiting errors or software backdoors). In this case we register that the event occurred, but we cannot prevent it in full; we can only try to minimize its consequences.
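The sketch below illustrates the application-level variant only: it detects the fact that a process outside a whitelist is running and terminates it. It assumes the third-party psutil package, and the whitelist is purely hypothetical; this is not the implementation of any particular protection product.

```python
# pip install psutil  (third-party package, assumed here)
import psutil

ALLOWED = {"explorer.exe", "winword.exe", "python.exe"}   # hypothetical whitelist

def sweep() -> None:
    """Record the fact that an unauthorized process is running and react to it."""
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if name and name not in ALLOWED:
            print(f"unauthorized process: {name} (pid {proc.info['pid']})")
            try:
                proc.terminate()   # reaction after the fact: stop it as quickly as possible
            except psutil.Error:
                pass               # the process may have exited or be protected

# sweep()   # in practice this check must run continuously and with high frequency
```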

Taking into account the above, we can draw the following important conclusion. Protection mechanisms designed to solve the same problem at the system and at the application level should in no case be considered alternative solutions. These solutions complement each other, since they provide completely different protection properties. Therefore, when implementing effective protection (primarily in corporate applications), the most critical tasks must be solved simultaneously in both ways: at the system level and at the application level.

7.8 Methods of protection against computer viruses

Science still has no exact term defining a viral (malicious) program (VP, "computer virus"). The term was first used in 1984 by Dr. Fred Cohen, an employee of Lehigh University (USA), at the 7th Information Security Conference held in the USA. Cohen defined a computer virus as a sequence of symbols on a Turing machine tape: "a program that is capable of infecting other programs by modifying them to insert into them a nearly identical copy of itself." In Western literature the term "computer virus" is defined as follows: a self-copying program that can "infect" other programs, changing them or their environment so that a call to an "infected" program implies a call to a maximally identical, and in most cases functionally similar, copy of the "virus".

There is also no reliable information about the first VP. It is only known that in the late 1960s and early 1970s the very popular game ANIMAL, which created copies of itself in system libraries, ran on the Univac 1108 machine. Threats to computer security posed by viruses were already discussed in the first practical lesson. To counter these threats there are special antivirus programs (AV). A dedicated standard has been developed - STB P 34.101.8 "Software protection against the effects of malware and antivirus software. General requirements". According to STB P 34.101.8, a VP is program code (executable or interpreted) that has the property of exerting an unauthorized influence on an information technology object.

Types of viruses. A. A Trojan program (trojan) is a VP that is not capable of creating copies of itself and is not capable of spreading its body among information technology objects. B. A dropper is a VP that is not capable of creating copies of itself but introduces another malicious program (fileless worms, etc.) into an information technology object (see the presentation). For the classification of computer viruses by habitat, see the presentation.

The most widely used antivirus programs (AV) in Minsk are Kaspersky Anti-Virus (for example, Kaspersky Internet Security up to version 11), Bit Defender Internet Security, Panda Internet Security, Avast! Free Antivirus 5.0 Final, Avira AntiVir Personal Edition 10.0.0, Dr.Web 6.0 and the antivirus from LLC "VirusBlokAda". However, modern AV programs have a number of problems, which are divided into ideological and technical ones.

Ideological problems are associated, firstly, with the increase in the amount of work on analyzing the virus code due to the expansion of the concept of VP, and secondly, with the complexity of software classification, which depends either on the software configuration or on the method of installing the software. In this case, the decision to eliminate problems with viruses is shifted to the user.

Technical problems include the constant emergence of new, complex VPs, as well as delays in VP detection. The emergence of complex VPs forces the algorithms for detecting and neutralizing viruses to become more complicated. This, in turn, leads to a redistribution of computer resources: a growing share goes to AV protection and a shrinking share to application tasks. This problem is addressed by upgrading the computer fleet and by optimizing AV algorithms. To implement the latter, it is necessary to implement a processor emulator in assembly language and to use a dynamic translator.

To eliminate the delay in virus detection, the MalwareScope technology was invented, which makes it possible to detect unknown representatives of known virus families without updating anti-virus databases. Heuristic analysis can also be used, although it is prone to "false positive" and "false negative" errors. This method is, however, characterized by the high labor intensity of identifying fragments of virus code typical of a VP family. To reduce this labor intensity, a software robot has been developed that automates the adjustment of heuristic entries. In addition to heuristic analysis, behavioral analyzers/blockers can be used, which, however, protect only the object on which they are installed.
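As a toy illustration of signature-based scanning (nothing like a real AV engine), the sketch below searches files for known byte patterns and known-bad hashes; the signature set and directory name are invented, apart from a fragment of the standard EICAR test string.

```python
import hashlib
from pathlib import Path

BYTE_SIGNATURES = {b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"}   # fragment of the EICAR test file
BAD_SHA256 = set()                                          # hashes of known malicious files

def scan_file(path: Path) -> bool:
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() in BAD_SHA256:
        return True
    return any(sig in data for sig in BYTE_SIGNATURES)

def scan_dir(root: str) -> list:
    return [p for p in Path(root).rglob("*") if p.is_file() and scan_file(p)]

# print(scan_dir("downloads"))   # hypothetical directory to scan
```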

Information security software means special programs included in the software of a computer system (CS) exclusively to perform protective functions.

The main software tools for information security include:

  • * identification and authentication programs for CS users;
  • * programs for restricting user access to CS resources;
  • * information encryption programs;
  • * programs for protecting information resources (system and application software, databases, computer-aided training tools, etc.) from unauthorized modification, use and copying.

It must be understood that, with respect to the information security of a CS, identification means the unambiguous recognition of the unique name of a CS subject, while authentication means confirming that the presented name corresponds to the given subject (confirming the subject's authenticity) (5).

Information security software also includes:

  • * programs for destroying residual information (in blocks of RAM, temporary files, etc.);
  • * audit programs (maintaining logs) of events related to the safety of the CS to ensure the possibility of recovery and proof of the fact of the occurrence of these events;
  • * programs for simulating work with a violator (distracting him to obtain supposedly confidential information);
  • * test control programs for CS security, etc.

The advantages of information security software include:

  • * ease of replication;
  • * flexibility (they can be adapted to different application conditions, taking into account the specific information security threats of a particular CS);
  • * ease of use - some software tools, for example encryption, operate in a “transparent” (invisible to the user) mode, while others do not require any new (compared to other programs) skills from the user;
  • * virtually unlimited possibilities for their development by making changes to take into account new threats to information security.

Fig. 4

Fig. 5

Disadvantages of information security software include:

  • * reducing the effectiveness of the CS due to the consumption of its resources required for the functioning of protection programs;
  • * lower performance (compared to hardware security tools that perform similar functions, such as encryption);
  • * many software protection tools are attached to the CS software rather than embedded in it (Fig. 4 and 5), which creates a fundamental possibility for an intruder to bypass them;
  • * the possibility of malicious changes in software protection during the operation of the CS.

Security at the operating system level

The operating system is the most important software component of any computer, therefore the overall security of the information system largely depends on the level of implementation of the security policy in each specific OS.

The Windows 2000 and Millennium family of operating systems are clones originally aimed at home computers. These operating systems use protected-mode privilege levels but do not perform any additional checks and do not support security descriptor systems. As a result, any application can access the entire amount of available RAM with both read and write rights. Network security measures are present, but their implementation is not up to par. Moreover, in Windows XP a fundamental mistake was made that allowed a computer to be frozen remotely with just a few packets, which also significantly undermined the reputation of the OS; in subsequent versions many steps were taken to improve the network security of this clone (6).

The Windows Vista and 7 generation of operating systems is a much more reliable development by Microsoft. They are truly multi-user systems that reliably protect the files of different users on the hard drive (however, the data is not encrypted, and the files can be read without difficulty by booting from the disk of another operating system, for example MS-DOS). These operating systems actively use the protected-mode capabilities of Intel processors and can reliably protect data and process code from other programs, unless the process itself wishes to grant additional access to them from outside.

Over the long period of development, many different network attacks and security errors were taken into account. Corrections for them were released in the form of service packs.

Another branch of clones grows from the UNIX operating system. This OS was initially developed as a network, multi-user OS and therefore contained information security tools from the start. Almost all widespread UNIX clones have gone through a long development process and, as they were modified, took into account all the attack methods discovered during that time. They have proven themselves quite well: LINUX (S.U.S.E.), OpenBSD, FreeBSD, Sun Solaris. Naturally, all of the above applies to the latest versions of these operating systems. The main errors in these systems no longer relate to the kernel, which works flawlessly, but to system and application utilities. The presence of errors in them often leads to the loss of the entire safety margin of the system.

Main components:

Local Security Authority - responsible for protection against unauthorized access; checks the user's permission to log on to the system; supports:

Audit - checking the correctness of user actions

Account Manager - maintains the database of users, their actions and their interactions with the system.

Security monitor - checks whether the user has sufficient access rights to the object

Audit log - contains information about user logins, records work with files and folders.

Authentication package - analyzes system files to make sure they have not been replaced; MSV1_0 is the default package.

Windows XP added:

the ability to assign passwords to backup copies

File replacement protection tools

an access delimitation system ... by entering a password and creating user accounts. Archiving can be carried out by a user who has the corresponding rights.

NTFS: access control to files and folders

In XP and 2000 there is a more complete and deeper differentiation of user access rights.

EFS - provides encryption and decryption of information (files and folders) to limit access to data.

Cryptographic protection methods

Cryptography is the science of ensuring data security. It addresses four important security problems: confidentiality, authentication, integrity and participant control. Encryption is the transformation of data into an unreadable form using encryption/decryption keys. Encryption ensures confidentiality by keeping information secret from those for whom it is not intended.

Cryptography deals with the search and study of mathematical methods for transforming information (7).

Modern cryptography includes four major sections:

symmetric cryptosystems;

public key cryptosystems;

electronic signature systems;

key management.

The main areas of use of cryptographic methods are the transfer of confidential information through communication channels (for example, e-mail), establishing the authenticity of transmitted messages, storing information (documents, databases) on media in encrypted form.
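As a minimal sketch of symmetric encryption for such a confidential message, the example below uses the third-party Python package cryptography (the Fernet scheme); key distribution is outside the scope of the example.

```python
# pip install cryptography  (third-party package, assumed here)
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key must be distributed securely
cipher = Fernet(key)

token = cipher.encrypt(b"confidential report, do not distribute")
print(token)                         # unreadable ciphertext
print(cipher.decrypt(token))         # the original plaintext, only with the right key
```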

Disk encryption

An encrypted disk is a container file that can contain any other files or programs (they can be installed and launched directly from this encrypted file). The disk is accessible only after the password for the container file is entered - then another disk appears on the computer, which the system recognizes as a logical disk and which is used like any other disk. After the disk is disconnected, the logical disk disappears; it simply becomes "invisible".

Today, the most common programs for creating encrypted disks are DriveCrypt, BestCrypt and PGPdisk. Each of them is reliably protected from remote hacking.

Common features of the programs: (8)

  • - all changes to the information in the container file occur first in RAM, i.e. the data on the hard drive always remains encrypted. Even if the computer freezes, the secret data remains encrypted;
  • - programs can block a hidden logical drive after a certain period of time;
  • - they all treat temporary files (swap files) with suspicion. It is possible to encrypt all confidential information that could end up in the swap file. A very effective method of hiding information stored in the swap file is to disable it altogether, while not forgetting to increase the computer's RAM;
  • - the physics of a hard drive is such that even when new data is written over old data, the previous record is not completely erased; with modern magnetic force microscopy (MFM) it can still be recovered. With these programs you can securely delete files from your hard drive without leaving any trace of their existence (a minimal sketch of such overwriting follows this list);
  • - all three programs store confidential data in a securely encrypted form on the hard drive and provide transparent access to this data from any application program;
  • - they protect encrypted container files from accidental deletion;
  • - they cope well with Trojan applications and viruses.
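The sketch announced above shows the idea of secure deletion by overwriting (Python; the file name is hypothetical). Real tools make several passes and also handle file-system metadata; this only demonstrates the principle.

```python
import os
from pathlib import Path

def wipe(path: str, passes: int = 2) -> None:
    """Overwrite the file contents with random bytes before removing it."""
    p = Path(path)
    size = p.stat().st_size
    with open(p, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite with random data
            f.flush()
            os.fsync(f.fileno())        # push the overwrite to the disk
    p.unlink()                          # finally remove the directory entry

# wipe("secret_notes.txt")
```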

User identification methods

Before gaining access to the computer, the user must identify himself, after which network security mechanisms authenticate the user, i.e., check whether the user is who he claims to be. In accordance with the logical model of the protection mechanism, the computing system is located on a working computer to which the user connects through his terminal or in some other way. Therefore, the identification, authentication and authorization procedures are performed at the beginning of a session on the local desktop computer.

Later, when various network protocols are used and before access to network resources is granted, the identification, authentication and authorization procedures may be performed again on the remote computers that host the required resources or network services.

When a user starts working on a computer system using a terminal, the system prompts for his name and an identification number. In accordance with the user's answers, the computer system identifies him. In a network, it is more natural for objects establishing mutual communication to identify each other.

Passwords are just one way to verify authenticity. There are other ways:

  • 1. Predefined information at the user's disposal: password, personal identification number, agreement on the use of special encoded phrases.
  • 2. Hardware elements at the user's disposal: keys, magnetic cards, microcircuits, etc.
  • 3. Characteristic personal features of the user: fingerprints, retinal pattern, body dimensions, voice timbre and other more complex medical and biochemical properties.
  • 4. Characteristic techniques and features of the user's behavior in real time: movement dynamics, typing style, reading speed, skill in using manipulators, etc.
  • 5. Habits: using specific computer routines.
  • 6. User skills and knowledge due to education, culture, training, background, upbringing, habits, etc.

If someone wishes to log into a computing system through a terminal or execute a batch job, the computing system must authenticate the user. The user himself, as a rule, does not verify the authenticity of the computer system. If the authentication procedure is one-sided, such a procedure is called one-way object authentication (9).

Specialized information security software.

Specialized software tools for protecting information from unauthorized access generally have better capabilities and characteristics than the built-in tools of network operating systems. In addition to encryption programs, many other external information security tools are available. Of those most frequently mentioned, the following two systems, which allow information flows to be limited, should be noted.

Firewalls (literally "fire wall"). Special intermediate servers are created between the local and global networks, which inspect and filter all network/transport level traffic passing through them. This dramatically reduces the threat of unauthorized access to corporate networks from outside, but does not eliminate the danger completely. A more secure variant of the method is masquerading, in which all traffic originating from the local network is sent on behalf of the firewall server, making the local network practically invisible.

Proxy servers (proxy - an authorized intermediary). All network/transport layer traffic between the local and global networks is completely prohibited - there is simply no routing as such, and accesses from the local network to the global network occur through special intermediary servers. Obviously, with this method, accesses from the global network to the local one become impossible in principle. It is also obvious that this method does not provide sufficient protection against attacks at higher levels - for example, at the application level (viruses, Java and JavaScript code).

Let's take a closer look at how the firewall works. This is a method of protecting a network from security threats posed by other systems and networks by centralizing access to the network and controlling it through hardware and software. A firewall is a protective barrier made up of several components (for example, a router or gateway that runs the firewall software). The firewall is configured in accordance with the organization's internal network access control policy. All incoming and outgoing packets must pass through the firewall, which allows only authorized packets to pass through.

A packet filtering firewall is a router or a computer running software configured to reject certain types of incoming and outgoing packets. Packet filtering is carried out based on the information contained in the TCP and IP headers of the packets (sender and recipient addresses, their port numbers, etc.).
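A minimal sketch of filtering on header fields is shown below (Python; the rules and addresses are invented for illustration, and a real firewall of course works on raw packets rather than objects like these).

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int
    proto: str              # "tcp" or "udp"

RULES = [
    # (action, source network, destination port; None means any port)
    ("deny",  ip_network("203.0.113.0/24"), None),   # blacklisted network
    ("allow", ip_network("0.0.0.0/0"),      80),     # HTTP
    ("allow", ip_network("0.0.0.0/0"),      443),    # HTTPS
]

def filter_packet(pkt: Packet) -> str:
    for action, net, port in RULES:
        if ip_address(pkt.src) in net and (port is None or pkt.dst_port == port):
            return action
    return "deny"           # default policy: everything not explicitly allowed is denied

print(filter_packet(Packet("198.51.100.7", "10.0.0.5", 443, "tcp")))   # allow
print(filter_packet(Packet("203.0.113.9", "10.0.0.5", 443, "tcp")))    # deny
```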

Expert level firewall - checks the contents of received packets at three levels of the OSI model - network, session and application. To accomplish this task, special packet filtering algorithms are used to compare each packet with a known pattern of authorized packets.

Creating a firewall amounts to solving the shielding problem. The formal formulation of the shielding problem is as follows. Let there be two sets of information systems. A screen is a means of delimiting the access of clients from one set to servers from the other set. The screen carries out its functions by controlling all information flows between the two sets of systems (Fig. 6). Flow control consists of filtering the flows, possibly with some transformations.

At the next level of detail, it is convenient to think of a screen (a semi-permeable membrane) as a series of filters. Each of the filters, having analyzed the data, can delay (not pass) it or can immediately "throw" it off the screen. In addition, it is possible to transform the data, pass a portion of the data on to the next filter for further analysis, or process the data on behalf of the recipient and return the result to the sender (Fig. 7).


Fig. 7

In addition to access control functions, screens record information exchange.

Usually the screen is not symmetrical; the concepts of "inside" and "outside" are defined for it. In this case, the shielding task is formulated as protecting the internal area from a potentially hostile external one. Thus, firewalls are most often installed to protect the corporate network of an organization that has access to the Internet.

Shielding helps maintain the availability of internal domain services by reducing or eliminating the load caused by external activity. The vulnerability of internal security services is reduced, since the attacker must initially overcome the screen where the protective mechanisms are configured especially carefully. In addition, the shielding system, in contrast to the universal one, can be designed in a simpler and, therefore, safer way.

Shielding also makes it possible to control information flows directed to the external area, which helps maintain the confidentiality regime in the organization's information system.

Shielding can also be partial, protecting certain information services (for example, shielding e-mail).

A limiting interface can also be thought of as a type of shielding. An invisible target is difficult to attack, especially with a fixed set of weapons. In this sense, the Web interface has natural security, especially when hypertext documents are generated dynamically. Each user sees only what he is supposed to see. An analogy can be drawn between dynamically generated hypertext documents and views in relational databases, with the significant caveat that in the case of the Web the possibilities are much wider.

The screening role of a Web service is clearly manifested when this service performs intermediary (more precisely, integrating) functions when accessing other resources, for example, database tables. This not only controls the flow of requests, but also hides the real organization of the data.

Architectural Security Aspects

It is not possible to combat the threats inherent in the network environment using universal operating systems. A universal OS is a huge program that most likely contains, in addition to obvious errors, features that can be used to illegally gain privileges. Modern programming technology does not make it possible to make such large programs safe. In addition, an administrator dealing with a complex system is not always able to take into account all the consequences of the changes made. Finally, in a universal multi-user system, security holes are constantly created by the users themselves (weak and/or rarely changed passwords, poorly set access rights, an unattended terminal, etc.). The only promising path is related to the development of specialized security services, which, due to their simplicity, allow formal or informal verification. A firewall is just such a tool, allowing further decomposition associated with servicing various network protocols.

The firewall is located between the protected (internal) network and the external environment (external networks or other segments of the corporate network). In the first case we speak of an external firewall, in the second of an internal one. Depending on one's point of view, an external firewall can be considered the first or the last (but not the only) line of defense. The first - if you look at the world through the eyes of an external attacker. The last - if you strive to protect all components of the corporate network and suppress illegal actions of internal users.

A firewall is an ideal place to embed active auditing capabilities. On the one hand, at both the first and the last defensive line, identifying suspicious activity is important in its own way. On the other hand, the firewall is capable of implementing an arbitrarily powerful reaction to suspicious activity, up to and including breaking the connection with the external environment. However, one must be aware that combining two security services can, in principle, create a gap facilitating availability attacks.

It is advisable to entrust the firewall with the identification/authentication of external users who need access to corporate resources (supporting the single sign-on concept).

In line with the principle of defense in depth, two-component shielding is usually used to protect external connections (see Fig. 8). Primary filtering (for example, blocking packets of the SNMP management protocol, which is dangerous because of availability attacks, or packets with IP addresses on a "black list") is carried out by a border router (see also the next section); behind it lie the so-called demilitarized zone (a network with moderate security trust, where the organization's external information services - Web, e-mail, etc. - are located) and the main firewall protecting the internal part of the corporate network.

Theoretically, a firewall (especially an internal one) should be multi-protocol, but in practice the dominance of the TCP/IP protocol family is so great that supporting other protocols seems like an overkill that is detrimental to security (the more complex the service, the more vulnerable it is).


Fig. 8

Generally speaking, both external and internal firewalls can become a bottleneck as the volume of network traffic tends to grow rapidly. One approach to solving this problem involves dividing the firewall into several hardware parts and organizing specialized intermediary servers. The primary firewall can roughly classify incoming traffic by type and delegate filtering to appropriate intermediaries (for example, an intermediary that analyzes HTTP traffic). Outgoing traffic is first processed by an intermediary server, which can also perform functionally useful actions, such as caching pages of external Web servers, which reduces the load on the network in general and the main firewall in particular.

Situations where a corporate network contains only one external channel are the exception rather than the rule. On the contrary, a typical situation is when a corporate network consists of several geographically dispersed segments, each of which is connected to the Internet. In this case, each connection must be protected by its own shield. More precisely, we can consider that the corporate external firewall is composite, and it is necessary to solve the problem of consistent administration (management and auditing) of all components.

The opposite of composite corporate firewalls (or their components) are personal firewalls and personal shielding devices. The first are software products that are installed on personal computers and only protect them. The latter are implemented on individual devices and protect a small local network, such as a home office network.

When deploying firewalls, you should adhere to the principles of architectural security we discussed earlier, first of all taking care of simplicity and manageability, the echelon of defense, and the impossibility of transitioning into an insecure state. In addition, not only external but also internal threats should be taken into account.

Archiving and duplication systems

Organizing a reliable and effective data archiving system is one of the most important tasks in ensuring the safety of information on the network. In small networks where one or two servers are installed, the most common method is to install the archiving system directly into free slots of the servers. In large corporate networks it is preferable to set up a dedicated, specialized archiving server.

Such a server automatically archives information from the hard drives of the servers and workstations of the local computer network at a time specified by the administrator, and issues a report on the backup performed.

Storage of archival information of particular value must be organized in a special secured room. Experts recommend storing duplicate archives of the most valuable data in another building, in case of fire or natural disaster. To ensure data recovery in the event of magnetic disk failures, disk array systems have recently been used most often - groups of disks operating as a single device and complying with the RAID (Redundant Array of Inexpensive Disks) standard. These arrays provide high data read/write speed, the ability to fully recover data and the replacement of failed disks in "hot" mode (without disconnecting the remaining disks of the array).

The organization of disk arrays provides for various technical solutions implemented at several levels:

RAID Level 0 simply divides the data stream between two or more drives. The advantage of this solution is that the I/O speed increases in proportion to the number of disks involved in the array.

RAID level 1 consists of organizing so-called “mirror” disks. During data recording, the information on the main disk of the system is duplicated on the mirror disk, and if the main disk fails, the “mirror” disk immediately comes into operation.

RAID levels 2 and 3 provide for the creation of parallel disk arrays, when written to which data is distributed across disks at the bit level.

RAID levels 4 and 5 are a modification of level zero, in which the data flow is distributed across the array disks. The difference is that at level 4 a special disk is allocated to store redundant information, and at level 5 the redundant information is distributed across all disks of the array.
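The parity idea behind levels 4 and 5 can be shown in a few lines (Python; the "disks" are just byte strings): the parity block is the XOR of the data blocks, so any single lost block can be reconstructed from the others.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three data disks
parity = xor_blocks(data)            # stored on a dedicated disk (level 4)
                                     # or spread across all disks (level 5)

# Disk 2 fails: its block is recovered from the surviving blocks and the parity.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])          # True
```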

Increasing reliability and protecting data on the network on the basis of redundant information are implemented not only at the level of individual network elements, such as disk arrays, but also at the level of network operating systems. For example, Novell implemented fault-tolerant versions of the NetWare operating system - SFT (System Fault Tolerance):

  • - SFT Level I. The first level involves the creation of additional copies of the FAT and Directory Entries tables, immediate verification of each data block newly written to the file server, and the reservation of about 2% of the capacity of each hard drive.
  • - SFT Level II additionally contained the ability to create “mirror” disks, as well as duplicating disk controllers, power supplies and interface cables.
  • - The SFT Level III version allows you to use duplicate servers on a local network, one of which is the “master”, and the second, containing a copy of all information, comes into operation if the “main” server fails.

Security analysis

The security analysis service is designed to identify vulnerabilities in order to quickly eliminate them. This service itself does not protect against anything, but it helps to detect (and eliminate) security gaps before an attacker can exploit them. First of all, we do not mean architectural ones (they are difficult to eliminate), but “operational” gaps that appeared as a result of administration errors or due to inattention to updating software versions.

Security analysis systems (also called security scanners), like the active audit tools discussed above, are based on the accumulation and use of knowledge. This refers to knowledge about security gaps: how to look for them, how serious they are, and how to fix them.

Accordingly, the core of such systems is a database of vulnerabilities, which determines the available range of capabilities and requires almost constant updating.

In principle, gaps of a very different nature can be identified: the presence of malware (in particular, viruses), weak user passwords, poorly configured operating systems, insecure network services, uninstalled patches, vulnerabilities in applications, etc. However, the most effective are network scanners (obviously, due to the dominance of the TCP/IP protocol family), as well as anti-virus agents (10). We classify anti-virus protection as a security analysis tool, without considering it a separate security service.

Scanners can identify vulnerabilities both through passive analysis, that is, studying configuration files, involved ports, etc., and by simulating the actions of an attacker. Some detected vulnerabilities can be eliminated automatically (for example, disinfection of infected files), others are reported to the administrator.
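As the simplest active form of such analysis, the sketch below performs a TCP connect scan of a few well-known ports (Python standard library only; the address and port list are placeholders, and scanning should of course only be run against hosts you administer).

```python
import socket

HOST = "192.0.2.10"                        # placeholder address of a host you administer
PORTS = [21, 22, 23, 25, 80, 443, 3389]    # a few well-known service ports

def open_ports(host: str, ports, timeout: float = 0.5) -> list:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
    return found

print(open_ports(HOST, PORTS))   # ports that answer and may need attention
```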

The control provided by security analysis systems is reactive, delayed in nature, it does not protect against new attacks, however, it should be remembered that defense must be layered, and security control as one of the boundaries is quite adequate. It is known that the vast majority of attacks are routine in nature; they are only possible because known security holes remain unfixed for years.

Hardware protection methods include devices that differ in operating principle and technical design and that implement protection against the disclosure and leakage of, and unauthorized access to, information sources. Such tools are used for the following tasks:

  • Detecting data leak lines in different rooms and objects
  • Carrying out special studies of the technical means that support operations, to check for the presence of leak lines
  • Localization of data leak lines
  • Counteracting unauthorized access to data sources
  • Search for and detection of traces of espionage

Hardware can be classified by functionality into detection, measurement, search, and passive and active countermeasure tools. Such tools can also be grouped by ease of use: device developers increasingly try to simplify working with a device for ordinary users. One example is the group of IP-type electromagnetic radiation indicators, which accept a wide range of input signals and have low sensitivity. Another is a complex for identifying and locating radio bugs, designed to detect and locate radio transmitters, telephone bugs or network transmitters. Another, the Delta complex, implements:

  • automatic location of microphones within a given room
  • accurate detection of any commercially available radio microphones and other emitting transmitters.

Search hardware can be divided into tools for collecting data and tools for examining leakage channels. Devices of the first type locate and search for already planted means of unauthorized access (NSD), while the second type identifies data leakage channels. Professional search equipment requires a highly qualified operator. As in any other field of technology, the more versatile a device, the weaker its individual parameters tend to be; at the same time, leakage channels vary widely in their physical nature. Large enterprises can afford expensive professional equipment and qualified staff for these tasks, and such hardware naturally performs better in real conditions, that is, it identifies leakage channels more reliably. This does not mean, however, that simple, inexpensive search tools should be ignored: they are easy to use and perform just as well in narrowly specialized tasks.

Hardware protection can be applied to individual parts of the computer: the processor, RAM, external memory, input/output controllers, terminals, and so on. Processors are protected by code redundancy, that is, the addition of extra bits to machine instructions and reserve bits to processor registers. RAM is protected by restricting access to memory boundaries and fields. To indicate the confidentiality level of programs or data, additional confidentiality bits are used to encode them. Data in RAM also requires protection against the reading of residual information after processing: an erasing circuit overwrites the entire memory block with a different character sequence. To identify a terminal, a code generator hardwired into the terminal equipment is used and checked on connection.
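By analogy with this hardware erasing circuit, the following Python sketch overwrites a buffer that held sensitive data before the memory is released. It is only an illustration of the idea: in a garbage-collected language, copies of the data may still exist elsewhere in memory, so this is a sketch of the principle rather than a guaranteed secure wipe.

```python
def wipe(buffer: bytearray, pattern: int = 0x00) -> None:
    """Overwrite every byte of a mutable buffer with a fixed pattern."""
    for i in range(len(buffer)):
        buffer[i] = pattern

secret = bytearray(b"temporary session key")  # hypothetical sensitive data
# ... use the secret ...
wipe(secret)                                  # overwrite the block, as the erasing circuit would
assert all(b == 0 for b in secret)
```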

Hardware data protection methods are various technical devices and structures that protect information from leakage, disclosure and unauthorized access.

Software protection mechanisms

Systems for protecting a workstation from intrusion by an attacker vary widely and are classified into:

  • protection methods within the computing system itself
  • personal protection methods implemented in software
  • protection methods that request data
  • active and passive protection methods

Details about this classification can be seen in Fig. 1.

Figure 1

Directions for implementing software information protection

The following directions are used to implement information protection in software:

  • copy protection
  • protection against NSD
  • virus protection
  • communication line protection

For each of these areas, many high-quality software products are available on the market. Protection software can also differ in the functions it provides:

  • Monitoring the operation and registration of users and technical means
  • Identification of existing hardware, users and files
  • Protection of computer operating resources and user programs
  • Services for various data processing modes
  • Destruction of data after its use in system elements
  • Alarm in case of violations
  • Additional programs for other purposes

The areas of software protection are divided into data protection (preserving integrity and confidentiality) and program protection (protecting the quality of information processing, which is a trade secret and the most vulnerable target for an attacker). Identification of files and hardware is implemented in software; the algorithm checks the registration numbers of various system components. An effective method for identifying addressable elements is a request-response (challenge-response) algorithm, sketched below. To differentiate the requests of different users for different categories of information, individual secrecy attributes for resources and personal control of user access to them are used. If, for example, the same file can be edited by different users, several versions are saved for further analysis.
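A request-response (challenge-response) check can be sketched as follows. The shared secret and the use of HMAC-SHA-256 are assumptions made for this illustration; the text does not prescribe a particular algorithm.

```python
import hashlib
import hmac
import os

SECRET = b"shared secret provisioned to the component"  # hypothetical shared key

def make_challenge() -> bytes:
    """The verifier sends a fresh random challenge."""
    return os.urandom(16)

def respond(challenge: bytes, secret: bytes = SECRET) -> bytes:
    """The component proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes = SECRET) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify(challenge, respond(challenge))
```

Because the challenge is random and never reused, simply replaying an old response does not grant access.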

Protection of information from unauthorized access

To implement intrusion protection, you need to implement the following basic software functions:

  • Identification of objects and subjects
  • Registration and control of actions with programs and actions
  • Restricting access to system resources

Identification procedures involve checking whether the subject who is trying to gain access to resources is who he claims to be. Such checks may be periodic or one-time. For identification, the following methods are often used in such procedures:

  • complex, simple or one-time passwords;
  • badges, keys, tokens;
  • special identifiers for equipment, data, programs;
  • methods for analyzing individual characteristics (voice, fingerprints, hand geometry, face).
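Since passwords head this list, a minimal sketch of how a password can be stored and checked without keeping it in clear text may be useful. The choice of PBKDF2 with SHA-256 and the iteration count are illustrative assumptions, not requirements stated in the text.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Return (salt, derived key) for storage instead of the plain password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def check_password(password: str, salt: bytes, key: bytes, iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, key)
assert not check_password("guess", salt, key)
```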

Practice shows that password protection is a weak link: a password can be eavesdropped on, observed or guessed, so published recommendations for creating strong passwords should be followed. The object whose access is controlled may be a record in a file, the file itself, or a single field within a record. Typically, access control tools draw their decisions from an access matrix (a minimal sketch follows the list below). Access control can also be based on controlling information channels and on dividing objects and access subjects into classes. A set of software and hardware protection measures implements the following actions:

  • accounting and registration
  • access control
  • implementation of protection tools
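A minimal sketch of an access-matrix check, assuming a simple in-memory matrix of subjects, objects and rights (the names and rights are invented for the illustration):

```python
# Access matrix: subject -> object -> set of permitted operations.
ACCESS_MATRIX: dict[str, dict[str, set[str]]] = {
    "alice": {"report.doc": {"read", "write"}, "payroll.db": {"read"}},
    "bob":   {"report.doc": {"read"}},
}

def is_allowed(subject: str, obj: str, operation: str) -> bool:
    """Grant access only if the matrix explicitly lists the operation."""
    return operation in ACCESS_MATRIX.get(subject, {}).get(obj, set())

assert is_allowed("alice", "payroll.db", "read")
assert not is_allowed("bob", "payroll.db", "read")   # not listed -> denied
```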

You can also note the following forms of access control:

  • access prevention:
    • to individual sections
    • to the hard drive
    • to directories
    • to individual files
    • to removable storage media
  • modification protection:
    • directories
    • files
  • setting access privileges for a group of files
  • copy prevention:
    • directories
    • files
    • user programs
  • destruction protection:
    • files
    • directories
  • screen dimming after a period of inactivity.
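At the operating-system level, the simplest form of access prevention for an individual file is restricting its permission bits. The sketch below (POSIX-style permissions, with a hypothetical path) removes all access for group and other users, leaving the file readable and writable only by its owner.

```python
import os
import stat

def restrict_to_owner(path: str) -> None:
    """Leave read/write access for the owner only; remove it for group and others."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # equivalent to mode 0o600

# Example (hypothetical path): restrict_to_owner("/srv/data/confidential.txt")
```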

General means of protection against NSD are shown in Fig. 2.

Figure 2

Copy protection

Copy protection methods prevent the use and sale of stolen copies of programs. They are tools that allow a program to perform its functions only in the presence of a unique, non-copyable element, which may be part of the computer hardware or of an application program. Protection is implemented by the following functions (an environment-identification sketch follows the list):

  • identifying the environment from which the program is started
  • authenticating the environment from which the program is started
  • reacting to the program being started from an unauthorized environment
  • registering authorized copying
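One way to "identify the environment" is to bind a copy of a program to a fingerprint of the machine it was installed on. The sketch below derives a fingerprint from the host name and the network adapter's MAC address and compares it with the value recorded at installation time. The choice of fingerprint sources is an assumption for the illustration, not a description of any particular product.

```python
import hashlib
import platform
import uuid

def environment_fingerprint() -> str:
    """Hash a few machine-specific attributes into a short fingerprint."""
    raw = f"{platform.node()}-{uuid.getnode()}".encode()
    return hashlib.sha256(raw).hexdigest()

def is_authorized(stored_fingerprint: str) -> bool:
    """Compare the current environment with the one recorded during installation."""
    return environment_fingerprint() == stored_fingerprint

# At installation time the program would store environment_fingerprint();
# at every start it would call is_authorized() and refuse to run on a mismatch.
```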

Protecting information from deletion

Data can be deleted accidentally during a number of activities such as recovery, backup or updates. Because these events are so diverse, it is difficult to cover them all with rules. Deletion may also be caused by a virus or by human error. Viruses can be countered with antivirus software, but there are few countermeasures against human actions. To reduce these risks, a number of measures apply:

  • Inform all users of the damage the enterprise would suffer if such a threat were realized.
  • Prohibit receiving or opening software products from outside the information system.
  • Do not run games on PCs that process confidential information.
  • Archive copies of data and programs.
  • Verify the checksums of data and programs (a sketch follows this list).
  • Implement information protection measures.
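Checksum verification of data and programs can be sketched as follows: a manifest of SHA-256 hashes is built once and later compared with freshly computed values to detect modified, corrupted or missing files. The file paths and the hash algorithm are assumptions made for the example.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(paths: list[Path]) -> dict[str, str]:
    """Record the current hash of every listed file."""
    return {str(p): file_hash(p) for p in paths}

def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    return [name for name, expected in manifest.items()
            if not Path(name).exists() or file_hash(Path(name)) != expected]
```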

Software tools are objective forms of representing a set of data and commands intended for the operation of computers and computer devices in order to obtain a certain result, as well as materials prepared and recorded on a physical medium obtained during their development, and the audiovisual displays generated by them. These include:

Software (a set of control and processing programs), consisting of:

system programs (operating systems, maintenance programs);

application programs (programs designed to solve problems of a certain type, such as text editors, anti-virus programs, DBMSs, etc.);

instrumental programs (programming systems consisting of programming languages such as Turbo C or Microsoft Basic and of translators, i.e. programs that automatically translate algorithmic and symbolic languages into machine code);

machine information of the owner, possessor or user.

This level of detail is given in order to understand the essence of the issue more clearly, to highlight the methods of committing computer crimes and the objects and instruments of criminal assault, and to eliminate disagreements over computer-equipment terminology. Having examined the main components that together make up the concept of computer crime, we can move on to the main elements of the forensic characteristics of computer crimes.

Security software consists of special programs designed to perform security functions and included in the software of data processing systems. Software protection is the most common type of protection, thanks to such positive properties as universality, flexibility, ease of implementation and almost unlimited possibilities for change and development. By functional purpose, security programs can be divided into the following groups:

  • identification of technical means (terminals, group input-output control devices, computers, storage media), tasks and users;
  • determination of the rights of technical means (permitted days and hours of operation, permitted tasks) and users;
  • monitoring of the operation of technical means and users;
  • registration of the operation of technical means and users when processing restricted information;
  • destruction of information in storage after use;
  • alarms for unauthorized actions;
  • auxiliary programs for various purposes: monitoring the operation of the security mechanism, affixing a secrecy stamp to issued documents.
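The "registration" and "alarm" groups above can be illustrated with a small audit-logging sketch: every operation is recorded, and operations flagged as unauthorized are escalated as alerts. The event names and the log file are invented for the example.

```python
import logging

# Audit log: every recorded event gets a timestamped entry.
logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def register_event(user: str, action: str, authorized: bool) -> None:
    """Record the operation; escalate unauthorized actions as warnings (alarms)."""
    if authorized:
        logging.info("user=%s action=%s", user, action)
    else:
        logging.warning("ALARM unauthorized attempt: user=%s action=%s", user, action)

register_event("operator1", "print restricted report", authorized=True)
register_event("guest", "copy restricted file", authorized=False)
```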

Antivirus protection

Information security is one of the most important characteristics of any computer system. A large number of software and hardware tools have been created to ensure it: some encrypt information, others restrict access to data. Computer viruses pose a particular problem. They are a separate class of programs aimed at disrupting the system and damaging data. Among viruses there are many varieties: some reside permanently in the computer's memory, while others carry out their destructive actions in one-time "strikes". There is also a whole class of programs that look quite respectable but actually damage the system; such programs are called "Trojan horses". One of the main properties of computer viruses is the ability to "reproduce", that is, to spread themselves within a computer and across a computer network.

Since office applications became able to run programs written specifically for them (for Microsoft Office, for example, you can write applications in Visual Basic), a new type of malware has appeared: so-called macro viruses. Viruses of this type spread along with ordinary document files and are contained within them as ordinary subroutines.

Not so long ago (this spring) there was an epidemic of the Win95.CIH virus and its numerous variants. This virus destroyed the contents of the computer's BIOS, making the machine inoperable; motherboards damaged by the virus often had to be thrown away.

Given the rapid development of communication tools and the sharply increased volume of data exchange, virus protection has become very urgent: practically any document received, for example by e-mail, may carry a macro virus, and any running program can (in theory) infect the computer and render the system inoperable.

Therefore, among security systems, the fight against viruses is the most important area, and a number of tools are designed specifically for it. Some run in scanning mode, checking the contents of the computer's hard drives and RAM for viruses. Others must run constantly, residing in the computer's memory and trying to monitor all ongoing tasks.
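A deliberately simplified sketch of the "scanning mode" described here: the scanner walks a directory tree and looks for known byte signatures inside each file. The two signatures are placeholders invented for the example; real products rely on large, regularly updated signature databases plus heuristics and resident monitors.

```python
from pathlib import Path

# Placeholder "virus signatures": byte patterns a real scanner would load
# from its signature database.
SIGNATURES = {
    "Example.TestSig.A": b"\xde\xad\xbe\xef",
    "Example.TestSig.B": b"MALICIOUS_MARKER",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of signatures found in a single file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_tree(root: Path) -> dict[str, list[str]]:
    """Scan every regular file under `root` and report infected ones."""
    report = {}
    for path in root.rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                report[str(path)] = hits
    return report

# Example (hypothetical directory): print(scan_tree(Path("/tmp/suspect")))
```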

On the Russian software market, the AVP package developed by the Kaspersky Anti-Virus Systems Laboratory has gained the greatest popularity. This is a universal product that has versions for a wide variety of operating systems.

Kaspersky Anti-Virus (AVP) uses all modern types of antivirus protection: antivirus scanners, monitors, behavioral blockers and change auditors. Various product versions support all popular operating systems, mail gateways, firewalls and web servers. The system lets you control all possible routes of virus penetration onto the user's computer, including the Internet, email and removable storage media. Kaspersky Anti-Virus management tools allow you to automate the most important operations of centralized installation and management, both on a local computer and for comprehensive protection of an enterprise network. Kaspersky Lab offers three ready-made anti-virus protection solutions aimed at the main categories of users: firstly, anti-virus protection for home users (one license for one computer); secondly, anti-virus protection for small businesses (up to 50 workstations on the network); thirdly, anti-virus protection for corporate users (over 50 workstations on the network). Gone are the days when, to be reasonably sure of safety from "infection", it was enough to avoid "random" floppy disks and run the Aidstest utility once or twice a week to scan the computer's hard drive for suspicious objects. Firstly, the range of areas in which such objects may appear has expanded: e-mail with attached "harmful" files, macro viruses in office (mostly Microsoft Office) documents and "Trojan horses" all appeared relatively recently. Secondly, periodic audits of the hard drive and archives no longer justify themselves: such checks would have to be carried out too often and would consume too many system resources.

Outdated security systems have been replaced by a new generation capable of tracking and neutralizing the “threat” in all critical areas - from email to copying files between disks. At the same time, modern antiviruses organize constant protection - this means that they are constantly in memory and analyze the information being processed.

One of the best-known and most widely used antivirus packages is AVP from Kaspersky Lab. The package exists in a large number of variants, each designed to solve a specific range of security problems and each with its own specific properties.

Protection systems distributed by Kaspersky Lab are divided into three main categories, depending on the types of tasks they solve. These include protection for small businesses, protection for home users and protection for corporate clients.

AntiViral Toolkit Pro includes programs for protecting workstations running various operating systems (AVP scanners for DOS, Windows 95/98/NT and Linux; AVP monitors for Windows 95/98/NT and Linux), file servers (AVP monitor and scanner for Novell NetWare; monitor and scanner for NT Server), web servers (the AVP Inspector disk inspector for Windows), and Microsoft Exchange mail servers and gateways (AVP for Microsoft Exchange).

AntiViral Toolkit Pro includes both scanner programs and monitor programs. Monitors provide the fuller control needed in the most critical areas of the network.

In Windows 95/98/NT networks, AntiViral Toolkit Pro allows centralized administration of the entire logical network from the administrator's workstation using the AVP Network Control Center software package.

The AVP concept makes it easy to update anti-virus programs regularly by replacing the anti-virus databases, a set of files with the .AVC extension that today allow more than 50,000 viruses to be detected and removed. Updated anti-virus databases are released and made available from the Kaspersky Lab server daily. At present, the AntiViral Toolkit Pro (AVP) antivirus package has one of the largest antivirus databases in the world.

