Documenting processes. Monitoring in corporate networks: an analysis of communication equipment monitoring practice

Introduction

In recent years, information technology has undergone significant and constant changes. According to some estimates, over the past five years, the volume of LAN traffic has increased tenfold. Thus, local area networks must provide increasing bandwidth and the required level of quality of service. However, whatever resources the network has, they are finite, so the network needs the ability to manage traffic.

And for management to be as effective as possible, you need to be able to control the packets that pass between devices on your network. The administrator also has a great variety of mandatory daily operations. These include, for example, checking that e-mail functions correctly, reviewing log files for early signs of malfunction, monitoring the connections of local networks and monitoring the availability of system resources. And this is where tools for monitoring and analyzing computer networks come to the rescue.
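For example, the simplest of these daily checks - host availability - is easy to automate. Below is a minimal Python sketch of such a check; the host addresses are placeholders, and the ping options shown are for Unix-like systems (Windows uses -n instead of -c).

# A tiny availability check of the kind an administrator runs daily:
# ping each host once and report which ones do not respond.
import subprocess

HOSTS = ['192.0.2.1', '192.0.2.2']   # hypothetical device addresses

for host in HOSTS:
    result = subprocess.run(['ping', '-c', '1', host],
                            stdout=subprocess.DEVNULL)
    status = 'up' if result.returncode == 0 else 'DOWN'
    print(f'{host}: {status}')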

In order not to get confused in the variety of methods, tools and products created for monitoring, let's start with a brief description of several large classes of these products.

Network Management Systems. These are centralized software systems that collect data on the state of nodes and communication devices on the network, as well as on the traffic circulating in the network. These systems not only monitor and analyze the network, but also perform network management actions in automatic or semi-automatic mode - enabling and disabling device ports, changing parameters in the address tables of bridges, switches and routers, etc. Examples of such management systems are the popular HP OpenView, Sun NetManager and IBM NetView.

System Management Tools. System management tools often perform functions similar to those of network management systems, but in relation to different objects. In the first case, the managed objects are the software and hardware of the network's computers, and in the second, communication equipment. However, some functions of these two kinds of management systems may overlap; for example, system management tools can perform the simplest analysis of network traffic.

Embedded systems of diagnostics and management (embedded systems). These systems are implemented as software and hardware modules installed in communication equipment, or as software modules built into operating systems. They perform diagnostic and management functions for a single device only, and this is their main difference from centralized management systems. An example of this class of tools is the Distributed 5000 hub management module, which implements auto-segmentation of ports upon fault detection, assignment of ports to the hub's internal segments, and some other functions. As a rule, built-in management modules also "moonlight" as SNMP agents, supplying device status data to management systems.

Protocol analyzers. These are software or hardware-software systems that, unlike management systems, are limited to monitoring and analyzing network traffic. A good protocol analyzer can capture and decode packets of a large number of protocols used in networks - usually several dozen. Protocol analyzers allow you to set logical conditions for capturing individual packets and perform complete decoding of the captured packets, that is, they show, in a form convenient for a specialist, how protocol packets of different levels are nested within each other, decoding the contents of the individual fields of each packet.

Expert systems. Systems of this type accumulate human knowledge about identifying the causes of abnormal network operation and possible ways of bringing the network back to a working state. Expert systems are often implemented as separate subsystems of various network monitoring and analysis tools: network management systems, protocol analyzers, network analyzers. The simplest variant of an expert system is a context-sensitive help system. More complex expert systems are so-called knowledge bases with elements of artificial intelligence. An example of such a system is the expert system built into the Cabletron Spectrum management system.

Multifunctional devices for analysis and diagnostics. In recent years, with local networks becoming ubiquitous, it has become necessary to develop inexpensive portable devices that combine the functions of several tools: protocol analyzers, cable scanners, and even some network management software capabilities. Examples of this type of device are Compas from Microtest, Inc. and the 675 LANMeter from Fluke Corp.

Management systems

Recently, two fairly clearly expressed trends have been observed in the field of management systems:

  1. Integration of network and systems management functions in one product. (The undoubted advantage of this approach is a single point of system control. The disadvantage is that under heavy network load, a server with the monitoring program installed may not be able to cope with processing all the packets and, depending on the product, will either ignore some of the packets or become a bottleneck in the system.)
  2. Distribution of the management system, in which there are several consoles in the system that collect information about the state of devices and systems and issue control actions. (Here the opposite is true: monitoring tasks are distributed among several devices, but duplication of the same functions and inconsistency between the control actions of different consoles are possible.)

Often, management systems perform not only the functions of monitoring and analyzing the operation of the network, but also include functions for actively influencing it - configuration and security management (see sidebar).

SNMP network management protocol

Most networking and management professionals love the concept of standards. This is understandable, because standards allow them to choose a supplier of network products based on criteria such as level of service, price and product performance, instead of being "chained" to the proprietary solution of a single manufacturer. The largest network today, the Internet, is standards-based. The Internet Engineering Task Force (IETF) was formed to coordinate development efforts for it and other TCP/IP networks.

The most common network management protocol is SNMP (Simple Network Management Protocol), which is supported by hundreds of manufacturers. The main advantages of SNMP are its simplicity, availability and vendor independence. SNMP was designed to manage routers on the Internet and is part of the TCP/IP stack.

What is a MIB - Men In Black?

If we are talking about tools for monitoring a corporate network, then this abbreviation hides the term Management Information Base. What is this database for?

SNMP is a protocol used to obtain from network devices information about their status, performance and characteristics, which is stored in a special database called the MIB. There are standards that define the structure of the MIB, including the set of types of its variables (objects in ISO terminology), their names and the allowed operations on these variables (for example, read). Along with other information, the MIB can store the network and/or MAC addresses of devices, the values of counters of processed packets and errors, and the numbers, priorities and states of ports. The tree structure of the MIB contains mandatory (standard) subtrees; in addition, it can contain private subtrees that allow the manufacturer of an intelligent device to implement specific functions based on its own variables.

An SNMP agent is a processing element that provides managers, located at management stations on the network, with access to the values of MIB variables and thus enables them to implement device management and monitoring functions.
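As a rough illustration of this manager-agent exchange, here is a minimal Python sketch that reads the standard MIB-II variable sysDescr from a device. It assumes the third-party pysnmp library (version 4.x API); the device address and the "public" community string are placeholders, not something from the article.

# Reading a MIB variable over SNMP, as a management station would.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),      # SNMPv2c community
        UdpTransportTarget(('192.0.2.1', 161)),  # hypothetical device
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print('SNMP request failed:', error_indication)
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')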

A useful addition to the functionality of SNMP is the RMON specification, which enables remote interaction with a MIB. Before RMON, SNMP could not be used in this way; it allowed only local management of devices. RMON works best on shared networks, where it can monitor all traffic. But if the network contains a switch that filters traffic so that it is invisible at a port unless it is destined for, or originates from, the device attached to that port, then your probe data will suffer.

To avoid this, manufacturers provide some RMON functionality on each port of the switch. This approach scales better than attaching a separate probe to poll every port of the switch.

Protocol Analyzers

When designing a new network or upgrading an old one, it is often necessary to quantitatively measure certain characteristics of the network, such as the intensity of data flows over network communication lines, the delays arising at various stages of packet processing, response times to requests of one kind or another, the frequency of certain events, etc.

In this difficult situation, you can use various tools and, first of all, the monitoring tools of network management systems, which have already been discussed in the previous sections of the article. Some measurements on the network can also be performed by software meters built into the operating system, an example of which is the Windows NT Performance Monitor component. This utility was developed to capture computer activity in real time and can help you identify most of the bottlenecks that hinder performance.

At the heart of Performance Monitor are a number of counters that record characteristics such as the number of processes waiting for a disk operation to complete, the number of network packets transmitted per unit of time, the percentage of processor utilization, and so on.
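Counters of the same kind can be sampled programmatically; the following sketch uses the cross-platform psutil library, which is our assumption for illustration and is not mentioned in the article.

# Sampling performance counters similar to those of Performance Monitor.
import psutil

for _ in range(5):
    cpu = psutil.cpu_percent(interval=1)   # CPU utilization, % over 1 s
    net = psutil.net_io_counters()         # cumulative network I/O counters
    disk = psutil.disk_io_counters()       # cumulative disk I/O counters
    print(f'cpu={cpu:5.1f}%  packets_sent={net.packets_sent}'
          f'  packets_recv={net.packets_recv}  disk_reads={disk.read_count}')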

But the most advanced network exploration tool is the protocol analyzer. Protocol analysis involves capturing and examining the packets circulating in the network that implement a particular network protocol. Based on the results of the analysis, you can make informed and balanced changes to any components of the network, optimize its performance, and troubleshoot problems. Obviously, in order to draw any conclusions about the impact of a change on the network, the protocols must be analyzed both before and after the change.
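The capture-and-decode cycle of a protocol analyzer can be approximated in a few lines. The sketch below uses the Scapy library purely as an illustration (the analyzers discussed in the article are commercial products); capturing normally requires administrator privileges.

# A toy packet capture with decoding, in the spirit of a protocol analyzer.
from scapy.all import sniff

def show_packet(pkt):
    # pkt.summary() gives a one-line decode; pkt.show() would print the
    # full nesting of protocol layers with every field decoded.
    print(pkt.summary())

# Logical capture condition (BPF filter): only TCP packets to/from port 80.
sniff(filter='tcp port 80', prn=show_packet, count=10)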

Usually, the process of analyzing protocols takes quite a long time (up to several working days) and includes the following steps:

  1. Capturing data.
  2. View captured data.
  3. Data analysis.
  4. Search for errors.
  5. Performance research: calculating the utilization rate of the network bandwidth or the average response time to a request (a utilization sketch is given after this list).
  6. A detailed study of individual sections of the network. The content of the work at this stage depends on the results obtained from the analysis of the network.
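The utilization calculation mentioned in step 5 is simple arithmetic: the bits observed during the capture interval divided by the link capacity over the same interval. The figures below are hypothetical.

# Average utilization of a 10 Mbit/s shared Ethernet segment.
LINK_SPEED = 10_000_000      # bit/s (assumed link capacity)
captured_bytes = 4_500_000   # hypothetical bytes seen during the interval
interval = 60                # capture interval, seconds

utilization = captured_bytes * 8 / (LINK_SPEED * interval)
print(f'average utilization: {utilization:.1%}')   # -> 6.0%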

This concludes our consideration of the theoretical points that must be taken into account when building a monitoring system for your network; we now proceed to the software products created for analyzing and controlling the operation of a corporate network.

Monitoring and Analysis Products

Comparative overview of the HP OpenView and Cabletron Spectrum management systems

Each suite of applications reviewed in this section breaks down network management into approximately four areas. The first is the integration of the kit into the overall network management infrastructure, which implies support for various types of devices from the same manufacturer.

The next functional area is the means for configuring and managing individual network devices such as a hub, switch or probe.

The third area is global management tools that are already responsible for grouping devices and organizing connections between them, for example, applications for generating a network topology diagram.

The topic of this article is the fourth functional area, traffic monitoring. Although VLAN configuration tools and global management are important aspects of network administration, it is generally impractical to implement formal network management procedures on a single Ethernet network. It is enough to thoroughly test the network after installation and check the load level from time to time.

A good platform for corporate network management systems must have the following qualities:

  • scalability;
  • true distribution in accordance with the client/server concept;
  • openness to cope with disparate hardware from desktops to mainframes.

The first two properties are closely related: good scalability is achieved through the distributed design of the management system. Distributed here means that the system can include multiple servers and clients.

Support for dissimilar equipment is a desirable rather than a real-life feature of today's management systems. We will look at two popular network management products: Spectrum from Cabletron Systems and OpenView from Hewlett-Packard. Both companies make their own communications equipment, so, naturally, Spectrum manages Cabletron equipment best, and OpenView manages Hewlett-Packard equipment best.

If the network map is built from equipment of other manufacturers, these systems begin to make mistakes and take some devices for others, and when managing those devices they support only their basic functions; the many useful additional functions that distinguish a given device from the rest the management system simply does not understand and therefore cannot use.

To avoid this situation, the developers of management systems include support not only for the standard MIB-I, MIB-II and RMON MIB, but also for numerous private vendor MIBs. The leader in this area is Spectrum, which supports over 1000 MIBs from various vendors.

However, the undoubted advantage of OpenView is its ability to recognize the network technologies of any network running over TCP/IP. In Spectrum this ability is limited to Ethernet, Token Ring, FDDI, ATM, distributed and switched networks. On the other hand, Spectrum scales better as the number of devices in the network grows: the number of serviced nodes is not limited.

It is obvious that, despite the weaknesses and strengths of both systems, if the network is dominated by equipment from one manufacturer, the availability of management applications from that manufacturer for any popular management platform allows network administrators to solve many problems successfully. Therefore, the developers of management platforms ship tools with them that simplify the development of applications, and the availability and number of such applications is considered a very important factor when choosing a management platform.

Systems for a broad class of networks

This sector of low-cost systems, for networks that are not very critical to failures, includes Foundation Agent Multi-Port, Foundation Probe and Foundation Manager from Network General. Together they form a complete RMON-based network monitoring system and include two types of monitoring agents, FoundationAgent and FoundationProbe, and the FoundationManager operator console.

FoundationAgent Multi-Port supports all the capabilities of a standard SNMP agent and an advanced data collection and filtering system, and also allows information to be collected from Ethernet or Token Ring segments using a single computer.

FoundationProbe is a certified PC with a certified network card and the corresponding type of FoundationAgent software preinstalled. FoundationAgent and FoundationProbe typically operate without a monitor or keyboard, since they are controlled by the FoundationManager software.

The FoundationManager console software comes in two flavors - for Windows and for UNIX.

The FoundationManager console allows you to graphically display statistics for all monitored network segments, automatically determine average network parameters and respond when permissible parameter limits are exceeded (for example, launch a handler program, or issue an SNMP trap or SNA alarm), and build a graphical dynamic map of traffic between stations.

Distributed network systems

This is the sector of expensive, high-end systems designed for network analysis and monitoring under the most stringent requirements for reliability and performance. It includes the Distributed Sniffer System (DSS), a system consisting of several hardware and software components distributed over the network for continuous analysis of all segments, including remote ones.

The DSS system is built from two types of components - SnifferServer (SS) and SniffMaster Console (SM). Ethernet, Token Ring or serial cards can be used as interfaces for interaction with the console. Thus, it is possible to monitor a segment of almost any network topology and to use various media for interaction with the console, including modem connections.

SnifferServer software consists of three subsystems - monitoring, protocol interpretation and expert analysis. The monitoring subsystem is a system for displaying the current state of the network, which makes it possible to obtain statistics for each of the stations and network segments for each of the protocols used. The other two subsystems deserve a separate discussion.

The functions of the protocol interpretation subsystem include analysis of captured packets and the fullest possible interpretation of each packet header field and its contents. Network General has created the most powerful subsystem of this type: the Protocol Interpreter is able to fully decode more than 200 protocols of all seven layers of the ISO/OSI model (TCP/IP, IPX/SPX, NCP, DECnet, Sun NFS, X Window, the IBM SNA protocol family, AppleTalk, Banyan VINES, OSI, XNS, X.25, and various internetworking protocols). Information can be displayed in one of three modes - general, detailed and hexadecimal.

The main purpose of the expert analysis system (ExpertAnalysis) is to reduce network downtime and eliminate network bottlenecks by automatically identifying anomalous phenomena and automatically generating methods for their resolution.

ExpertAnalysis provides what NetworkGeneral calls proactive analysis. To understand this concept, let us consider the processing of the same erroneous event in the network by traditional passive analysis systems and an active analysis system.

Let's say a broadcast storm starts on the network at 3:00 am and causes the database backup system to crash at 3:05 am. By 4:00 the storm stops and the network parameters return to normal. With a passive traffic analysis system on the network, administrators who come to work by 8:00 have nothing to analyze except information about the backup failure and, at best, general statistics on the night's traffic - no capture buffer is large enough to store all the traffic that passed over the network overnight. In such a situation, the likelihood of eliminating the cause of the broadcast storm is extremely small.

Now let's consider the reaction to the same events of the active analysis system. At 3:00, right after the start of the broadcast storm, the active analysis system detects the onset of a non-standard situation, activates the appropriate expert and records the information provided by it about the event and its causes in the database. At 3:05 am, a new non-standard situation associated with a failure of the archiving system is recorded, and the corresponding information is recorded. As a result, at 8:00 am, administrators receive a full description of the problems encountered, their causes and recommendations for eliminating these causes.
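The core of this active-analysis idea can be sketched very simply: periodically sample a broadcast-packet counter, compute the rate, and record the anomaly the moment it happens. In the sketch below the counter source is simulated; in practice it would be, for example, an SNMP read of the RMON variable etherStatsBroadcastPkts for the monitored segment. The threshold is an assumption.

# A toy active-analysis loop: detect a broadcast storm and log it at once,
# instead of hoping a capture buffer still holds the evidence by morning.
import random
import time
from datetime import datetime

THRESHOLD = 5000   # broadcast packets/s considered a storm (assumed)
_counter = 0

def read_broadcast_counter():
    """Placeholder for an SNMP read of an RMON counter such as
    etherStatsBroadcastPkts; simulated here with random increments."""
    global _counter
    _counter += random.randint(0, 200_000)
    return _counter

def monitor(interval, samples):
    previous = read_broadcast_counter()
    for _ in range(samples):
        time.sleep(interval)
        current = read_broadcast_counter()
        rate = (current - previous) / interval
        previous = current
        if rate > THRESHOLD:
            # Record the anomaly immediately, with its time and cause.
            print(f'{datetime.now().isoformat()} broadcast storm suspected: '
                  f'{rate:.0f} pkt/s exceeds {THRESHOLD} pkt/s')

monitor(interval=1, samples=5)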

Portable analysis and monitoring systems

A portable version of the analyzer, almost equal in capabilities to DSS, is implemented in the ExpertSniffer Analyzer (ESA) series, also known as TurboSniffer Analyzer. At a significantly lower cost than the DSS series, ESA provides the administrator with the same capabilities as a full-blown DSS, but only for the network segment to which the ESA is currently connected. The existing versions provide complete analysis, protocol interpretation and monitoring of the connected network segment or inter-segment communication line. The same network topologies are supported as for DSS systems. Typically, ESAs are used for periodic checks of non-critical network segments on which it is impractical to keep an analyzer agent permanently.

Novell LANalyser Protocol Analyzer

LANalyser is supplied as a network card and software that must be installed on a personal computer, or as a PC with the card and software already installed.

LANalyser has a well-developed, user-friendly interface through which the operating mode is selected. The LANalyser application menu is the main tool for configuring the capture mode and offers a choice of protocol stacks, filters, initiators, alarms, and more. The analyzer can work with NetBIOS, SMB, NCP, NCP Burst, TCP/IP, DECnet, Banyan VINES, AppleTalk, XNS, Sun NFS, ISO, EGP, NIS, SNA and some others.

In addition, LANalyser includes an expert system to assist the user in troubleshooting.

Conclusion

All of the above systems are definitely needed in the network of a large corporation, but they are too cumbersome for organizations where the number of network users does not exceed 200-300 people. Half of the system's functions would remain unclaimed, and the invoice for the distribution would scare the chief accountant and the head of the company. Moreover, in a small network, keeping track of hardware faults and system bottlenecks is in most cases quite within the power of one or two administrators and does not need automation.

Nevertheless, in our opinion, a network of any scale should have a network analysis system in one form or another, thanks to which it will be much easier for the administrator to manage his domain.

ComputerPress 7'2001

GOST R 55681-2013/ISO/TR 26122:2008*
_______________
* Amendment (IUS N 2-2015).

Group T62


NATIONAL STANDARD OF THE RUSSIAN FEDERATION

Information and documentation

Analysis of work processes from the point of view of document management

Information and documentation. Work process analysis for records

Introduction date 2014-09-01

Foreword

1 PREPARED by Limited Liability Company "Electronic Office Systems (design and implementation)" on the basis of its own authentic translation into Russian of the document specified in clause 4

2 INTRODUCED by the Technical Committee for Standardization TC 459 "Information support of the product life cycle"

3 APPROVED AND PUT INTO EFFECT by the Order of the Federal Agency for Technical Regulation and Metrology of October 31, 2013 N 1303-st

4 This standard is identical to the technical report ISO/TR 26122:2008* "Information and documentation - Work process analysis for records" (IDT)
________________
* Access to international and foreign documents mentioned in the text can be obtained by contacting the User Support Service. - Note from the manufacturer of the database.

(Amendment. IUS N 2-2015).

When applying this standard, it is recommended to use, instead of the referenced international standards, the corresponding national standards of the Russian Federation, information about which is given in the additional Appendix DA*
_______________
* The text of the document corresponds to the original. Appendix DA is not included in the paper original. - Note from the manufacturer of the database.

5 INTRODUCED FOR THE FIRST TIME


The rules for the application of this standard are set out in GOST R 1.0-2012 (section 8). Information on changes to this standard is published in the annual (as of January 1 of the current year) information index "National Standards", and the official text of changes and amendments is published in the monthly information index "National Standards". In case of revision (replacement) or cancellation of this standard, a corresponding notice will be published in the next issue of the information index "National Standards". Relevant information, notices and texts are also posted in the public information system - on the official website of the Federal Agency for Technical Regulation and Metrology on the Internet (gost.ru)


Amendment published in IUS N 2, 2015

Corrected by the manufacturer of the database

Introduction

All organizations, regardless of their size and the nature of their business activities, exist and operate to achieve specific goals and objectives. In order to accomplish its specific tasks, each organization establishes and implements appropriate work processes, which together form the organization's business activities.

Each organization creates documents in the course of work processes. These documents are evidence confirming the goals and objectives of the organization, its decisions and actions. To fully understand the nature of such "business documents", it is necessary to understand the workflows in which these documents were created. An understanding of workflows can also be used to determine which documents to create during workflows and to manage those documents over time as an organization's assets.

An analysis of workflows from a document management perspective is carried out to determine the requirements for creating, capturing and managing documents. It describes and analyzes what happens when certain business functions are performed in a specific business context. Such an analysis cannot be carried out in a purely abstract manner, and its results depend on the accuracy of the information collected and on a thorough understanding of the context of the organization's activities and its mission.

This standard is intended to:

- for document management specialists (or persons entrusted with document management in an organization) responsible for the creation and management of documents both in business systems and in specialized software applications for document management;

- for system / business analysts responsible for the design of business processes and / or systems within which documents will be created or managed.

For the purposes of this International Standard, workflow analysis includes:

a) identifying the relationships between work processes and their business context;

b) identifying the relationships between work processes and the rules governing their application (as determined by the relevant regulatory environment);

c) the hierarchical decomposition of work processes into components or sub-parts; and

d) identification of the relationships (sequential interdependence) between individual work processes and/or individual transactions, including their sequence.

Analyzing workflows in order to create and manage documents allows you to:

- to provide a clear identification of requirements for the creation of documents, contributing to the automation of capture and management of documents as work progresses; and

- determine the relationships between documents following from the business context, contributing to their logical organization and grouping and thereby ensuring, based on knowledge of business activities, clear documentation of work processes, as well as simplifying the search, storage within specified time frames and destruction of documents or their transfer to archival storage.

Workflow analysis facilitates the integration of document capture into these processes as work progresses. Examples of workflows in which document creation is typically integrated with transaction processing are order and invoice processing, payroll, asset management, inventory management, quality management system operation, and contract management. The integration of document processes into automation protocols applied to work processes enables the systematic creation, capture, and management of an organization's documents in relevant business systems.

1 Scope

This standard contains guidance on the analysis of work processes from the point of view of the creation, capture and management of documents. It covers two types of analysis:

a) functional analysis (decomposition of business functions into processes), and

b) sequential analysis (study of the flow of transactions).

Each type of analysis presupposes a preliminary study of the relevant context (i.e. the goals and objectives of the organization, the legal and regulatory environment). Individual elements of the analysis can be performed in various combinations and in an order differing from that described in the standard, depending on the nature of the task, the scope of the project and the purpose of the analysis. Recommendations and guidance are given in the form of lists of questions/topics to be considered when applying the relevant analysis element.

This International Standard describes the practical application of the theoretical provisions contained in GOST R ISO 15489-1. The standard itself is technology-neutral (i.e., it can be applied regardless of the specifics of the technology environment), although it can also be used to assess the adequacy of technology tools that support the organization's work processes.

(On the execution of processes in accordance with a set of procedural rules, see ....)

2 Normative references

The referenced documents listed below are indispensable in the application of this International Standard. For references where the date is indicated, only the cited version of the document applies. For undated references, the latest version of the relevant document (including any amendments) should be used.

ISO 15489-1, Information and documentation - Records management - Part 1: General.

ISO/TR 15489-2, Information and documentation - Records management - Part 2: Guidelines.

ISO/TS 23081-1, Information and documentation - Records management processes - Metadata for records - Part 1: Principles.

ISO/TS 23081-2:2007, Information and documentation - Records management processes - Metadata for records - Part 2: Conceptual and implementation issues.

Note - When using this standard, it is advisable to check the validity of the reference standards in the public information system - on the official website of the Federal Agency for Technical Regulation and Metrology on the Internet - or according to the annual information index "National Standards", published as of January 1 of the current year, and the issues of the monthly information index "National Standards" for the current year. If a referenced standard to which an undated reference is given has been replaced, it is recommended to use the current version of that standard, taking into account all changes made to that version. If a referenced standard to which a dated reference is given has been replaced, it is recommended to use the version of that standard with the year of approval (acceptance) indicated above. If, after the approval of this standard, a change affecting the cited provision is made to a referenced standard to which a dated reference is given, that provision is recommended to be applied without taking that change into account. If a referenced standard is cancelled without replacement, the provision containing the reference to it is recommended to be applied in the part not affecting that reference.

3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO 15489-1, ISO/TR 15489-2, ISO/TS 23081-1 and ISO/TS 23081-2, as well as the following, apply:

3.1 documentation: A collection of documents describing operations, instructions, decisions, procedures, and business rules that are specific to a specific business function, process, or transaction.

3.2 functional analysis: The grouping of processes according to the specific strategic objectives of the organization for which they are performed, revealing the relationships between business functions, processes and transactions that affect document management.

3.3 sequence of transactions: A series of transactions linked together by the requirement that the execution of a subsequent transaction is dependent on the completion of the previous transaction.

3.4 sequential analysis: Decomposition of a work process into a linear and/or chronological sequence, revealing the interdependencies between the transactions that form the work process.

3.5 transaction: The smallest element of a work process, representing an exchange of information and/or other resources between two or more participants or systems.

3.6 work process: One or more sequences of transactions required to achieve a result that complies with established rules.

4 Conducting an analysis of work processes

4.1 General

Workflow analysis from a document management perspective is used to collect information about the transactions, processes, and functions of an organization in order to determine the requirements for creating, capturing and managing documents.

Two methods are used to analyze workflows:

- functional;

- sequential.

Before choosing either of the analysis methods or a combination of them, it is necessary to determine the purpose of the workflow analysis project, its scope and scale, as well as the organizational context of the activity being analyzed (context analysis, see clause 5).

4.2 Documentation aspect of workflow analysis

Workflow analysis is the foundation required for the following processes used to create, capture and manage documents:

a) defining requirements for documenting the business function or other set of processes;

b) the development of functional (business function-based) classification schemes for the purpose of identifying, localizing and interlinking related documents;

c) maintaining the link between documents and the context of their creation;

d) developing naming and indexing rules and conventions to ensure and maintain the identification of documents over time;

e) identification of the owners of documents over time;

f) establishing proper retention periods for documents and developing guidelines on retention periods and the actions to be taken after their expiration;

g) analysis of risk management in the context of document management systems;

h) determining appropriate security controls for documents and establishing access rights and security levels.

4.3 Scope and scale of workflow analysis

The two methods of analysis can be used in different combinations, and their application can be scaled depending on the scope of the problem being solved. The scope of the analysis can vary according to the task at hand, ranging from a comprehensive identification and analysis of all business functions of an organization to micro-level analysis of a specific process in a particular business unit. The scope and level of detail will depend on the organization's risk assessment and the purpose of the document management problem being addressed.

In functional analysis, a top-down analytical method is used: the study begins with the goals and strategies of the organization and can go down to the level of transaction analysis. This method of analysis can be applied both to several organizations (operating within one or more jurisdictions) sharing responsibility for a business function, and to a single organization or one of its divisions.

The application of sequential analysis can scale to analyze processes across an organization, across several organizations (operating within one or more jurisdictions), in a large business unit, or in a single business unit. Depending on the objectives of the analysis, this method can be used both for analyzing a set of processes and for analyzing the transactions that make up a single process - down to individual keystrokes.

For the purposes of this International Standard, the hierarchy of concepts shown in Table 1 applies.


Table 1 Hierarchy of concepts

Level            | Source              | Example 1 (at the university)           | Example 2 (in medical practice)
Function         | ISO/TS 23081-2:2007 | Scientific research                     | Patient care
Set of processes | This standard       | Research funding                        | Examination, diagnosis and treatment of patients
Process          | ISO/TS 23081-2:2007 | Approval of research grant applications | Patient examination
Transaction      | ISO/TS 23081-2:2007 | Applying for a grant                    | Dispensing a prescription to a patient


NOTE - Many jurisdictions analyze business functions using their own terms to denote logical levels. In some cases, a jurisdiction or organization may choose to allocate different or additional layers when decomposing business functions to transactions. Both the number of levels and their position in the hierarchy depend on the practice in the particular jurisdiction and the scope and size of the workflow analysis project itself. Terms such as "sub-function", "activity" and "action" may be used that are not used in this technical report (in part to simplify its practical application).
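Purely as an illustration (this is not part of the standard), the Table 1 hierarchy can be modeled as a nested structure; the content below is taken from Example 1 (the university).

# Modeling the Table 1 hierarchy: function -> set of processes ->
# process -> transactions.
hierarchy = {
    'Scientific research': {                           # business function
        'Research funding': {                          # set of processes
            'Approval of research grant applications': [   # process
                'Applying for a grant',                # transaction
            ],
        },
    },
}

def list_transactions(tree, path=()):
    """Walk the hierarchy, printing each transaction with its full context."""
    if isinstance(tree, list):
        for transaction in tree:
            print(' / '.join(path + (transaction,)))
    else:
        for name, subtree in tree.items():
            list_transactions(subtree, path + (name,))

list_transactions(hierarchy)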


Emphasis is placed on functional analysis when developing a functional classification scheme for the entire organization, especially when defining the upper levels of the scheme. Emphasis is placed on sequential analysis when dealing with issues of creating, capturing and managing documents within a single process or a single business unit of the organization.

When conducting a workflow analysis for a specific project, the following question should be asked: does the project cover, from a document management perspective, the analysis of:

a) a separate transaction within a process?

b) a separate process within a business unit?

c) multiple interrelated processes (a set of processes) in a structural unit of the organization?

d) an overall business function as performed within one or more organizations?

e) the organization as a whole (functional analysis)?

4.4 Participants in work processes and verification of the accuracy of the information collected

The analysis of workflows for the creation, capture and management of documents is very concrete. It describes and analyzes the processes actually occurring in the organization, and the quality of the analysis depends on the accuracy of the information collected. Workflow participants are a key source of this information and also play an important role as the people to contact when verifying the accuracy of the information collected.

Learning about the role of participants in the process (for example, based on job descriptions) also facilitates the analysis of work processes. The nature of their involvement (eg advice and guidance, authorization, processing, evaluation, audit) can indicate the steps in the process as well as the points at which these steps are performed.

Verification of the accuracy of the information collected is of key importance for the successful completion of the workflow analysis, for acceptance of the analysis results and for cooperation in the practical implementation of the proposed recommendations. The outcome of the verification depends on confirmation by the workflow participants that the analysis results are complete, accurate and reliable.

4.5 Responsibility

The head of an organization is responsible for the effectiveness of the organization and for the way the organization conducts its business and carries out its work processes.

Responsibility for the documents created in the course of work processes is primarily borne by the heads and managers of the appropriate level to whom operational authority and responsibility for the business activity have been delegated. Adequate documentation is paramount for managers to fulfill their responsibilities for accountability, risk management and monitoring.

Part of the responsibility for the documents generated in a particular workflow is the documenting of the business rules, procedures and guidelines that govern that process. The relevant managers are responsible for maintaining and updating the documentation of the business rules and procedures specific to the activity. They are also responsible for establishing procedures to ensure that the analysis results are kept up to date in the event of significant changes in work processes.

People in an organization have different roles and responsibilities at different times. Changes in roles and responsibilities should be monitored as part of the contextual information needed to ensure that documents arising from the work processes performed by these individuals remain relevant.

5 Context analysis

5.1 General

Any workflow analysis should begin with an analysis of the context - the conditions in which the organization conducts its business - i.e. by examining the regulatory environment and organizational conditions (organizational context) within which work processes are carried out.

NOTE Additional guidance on conducting context analysis can be found in ISO 15489-1:2001, clauses 5 and 8.4 a)-c), and in ISO/TR 15489-2:2001, clause 3.2.


The legal and regulatory environment in which an organization operates consists of international and national legislation affecting the way the organization conducts its business, as well as business rules, mandatory and voluntary standards, agreements, practices, public expectations, etc., with which the organization must comply. The hierarchy of elements of the regulatory environment considered in the course of its analysis includes:

a) codified and case law governing both general business activities and the activities of a specific industry;

b) binding industry standards;

c) voluntary standards and codes of good business practice;

d) codes of conduct and ethics;

e) identifiable public expectations;

f) departmental or organization policies, and

g) the rules and procedures of the organization.

For public sector organizations, the expectations for the functions and processes performed by a particular organization are established by law or policy. For non-governmental sector organizations, relevant expectations can be formulated in a business prospectus, mission statement or constitution, which sets out the goals for which the organization was created.

Organizational context analysis identifies workflows in one or more organizations. It makes it possible to define the architecture of a function or process (e.g. centralized or decentralized) and the accountability for the efficient execution of the function and processes. Such an analysis reveals how functions, processes and individual transactions are distributed among organizational units, as well as the relationships between them. The accuracy of the results of organizational context analysis is achieved through the application of the functional and sequential methods of analysis (see clauses 6 and 7).

The context analysis performed during workflow analysis should provide, at the highest level, an accurate view of the regulatory environment and organizational context that authorizes the use of the workflow. If the scope of workflow analysis is limited to the framework of a separate process, then when analyzing the context, it is sufficient to study only the specific policies, procedures and rules governing this particular process. If, on the other hand, workflow analysis covers the entire business function, then the context analysis should examine all elements of the relevant regulatory environment and organizational context.

Table 2 lists a number of questions to be answered when conducting a context analysis.


Table 2 Context Analysis

Which laws, regulations and / or mission statements directly govern the workflow in question?

What other regulatory requirements affect the business function or process?

Are there mandatory standards or rules that the business function or process must comply with?

Are there rules, codes of practice or conduct established by the organization that are relevant to the business function or process(es)?

What specific procedures govern the process(es)?

What public expectations might affect the business function or process(es)?

How are processes localized within an organization (i.e. centralized or decentralized, performed within one organization or spanning multiple organizations, performed in one or more jurisdictions)?

To whom is the manager responsible for the process(es) accountable, and for what main results?

Who in the organization(s) participates in the process(es), and where are these participants located?

5.2 Results of the context analysis

The key elements of the regulatory environment and organizational context relating to the analyzed workflow are identified and documented. The information collected serves as the basis for functional and sequential analysis.

6 Functional analysis

6.1 General

Business functions are distinguished according to the goals of the organization. Business functions can be defined as processes grouped together to achieve specific strategic goals. Business functions should generally be mutually exclusive categories and should only be considered once in a review process, even when their constituent processes may be performed across multiple parts of an organization.

NOTE - The internal structure of business functions can contain several hierarchical levels, depending on how the business function is broken down in the given jurisdiction or organization. These levels can be called "sub-functions", "activities", "actions", etc., but in this technical report they are collectively referred to as "sets of processes".


In functional analysis, a top-down analytical method is used, when the study begins with the establishment of the goals and strategies of the organization, then the programs, projects and processes used to achieve them are determined; and decomposition of these programs, projects and processes is performed to a level that makes it possible to identify the relationship between them.

It is recommended to conduct functional analysis without regard to the structure of the organization, since the business function can be performed in several parts of the organization and / or in several organizations.

6.2 Business function analysis

6.2.1 The main stages of functional analysis

The main stages of the functional analysis are:

a) Establishing the goals and strategies of the organization.

The determination of the organization's goals and strategies is usually based on the results of the context analysis, on the organization's bylaws and public reports (annual reports, strategic planning documents, annual financial statements), and on internal planning and budgeting documents such as the corporate plan (see clause 5). Any available documentation containing an analysis of the organization's business functions should be considered.

b) Determination of the organization's business functions used to achieve these objectives.

Business functions are distinguished by grouping processes to achieve each specific goal. Defining the business functions of an organization is a two-way process in which a top-down analysis of the organization's goals is conducted, work processes are examined and analyzed, which are then grouped according to the goals and strategies of the organization.

c) Identifying the organizational processes that constitute these business functions.

When conducting a functional review of the organization as a whole, all processes should be considered. Unlike business functions, processes may be examined more than once as the analysis progresses, both because the same processes can be performed in different parts of the organization or in several organizations, and because processes of the same type occur in different business functions.

Thus, for example, planning, budget preparation, management of project documents and information, implementation and subsequent project evaluation are typical (generic) processes encountered in the analysis of most business projects relating to various business functions. These generic processes differ from each other in their specific business context or functional associations (compare, for example, the planning process in HR management with the planning process in financial management).

The processes that are specific to a particular business function are described using similarly descriptive terms, such as "renting property" (at a rental agency) or "employment" (at an employment agency).

In gathering information and analyzing processes, sequential analysis can be used to identify the constituent transactions of the processes.

d) Analysis of all the elements that make up the processes, in order to identify the constituent parts (transactions) of each process.

For detailed analysis of the information and resources required to carry out transactions, the sequential analysis method is usually used (see 7).

The depth of functional analysis depends on the task at hand. For example, for the purposes of classification or the final decision on the fate of documents, the analysis should identify all the individual processes that make up the business function in question. For the purposes of document management, it is necessary to go down to the level of individual transactions or to the point at which documents are created.

Table 3 provides a list of questions to be answered when identifying business functions, processes, and transactions.


Table 3 Identification of business functions, processes and transactions

What are the operational business functions of the organization (those that address the unique challenges of the organization)?

What administrative business functions of the organization support the execution of the operational business functions?

How are the operational and administrative business functions related to each other and to the structure of the organization?

Who is involved in the operational and administrative business functions, and where are these participants in the organization?

Is a business function or significant group of processes performed by multiple organizations operating in one or more jurisdictions?

Has the business function or significant group of processes been outsourced to another organization?

What are the core processes that make up each of the operational and administrative business functions?

How are these processes interrelated?

Which transactions form each of the processes?

6.2.2 Results of the business function analysis

For the purpose of creating a functional classification scheme or selecting sets of documents to be destroyed or transferred to permanent archival storage, a representative model of the organization's processes is developed and documented, reflecting both the hierarchical relationships between processes and business functions and the relationships among processes.

To support the creation of a thesaurus, naming conventions, or indexing rules, documentation has been prepared that describes the hierarchy of business functions, processes, and transactions.

7 Sequential analysis

7.1 General

7.1.1 Sequential analysis identifies and describes (maps) the sequences of transactions within a business process and their relationships with, and dependencies on, other processes. Sequential analysis seeks to account for each step of the workflow and usually yields a chronological sequence of these steps. The basis of sequential analysis is establishing what happens during the execution of the process under consideration. The purpose of the process description (mapping) is to determine the sequence of steps, i.e. what must be achieved at each step before the next transaction can be performed.

If the workflow is carried out using several simultaneously executed sequences of transactions (parallel processes), then sequential analysis brings them back to a logical sequence at the point where they converge. In the event that several sequences of transactions are executed during the process, the analysis should determine the point at which several sequences of transactions converge, as well as those sequences that need to be completed before other sequences can begin to execute. Each component of a transaction should be highlighted as a separate step.

Sequential analysis is applied at a lower level than functional analysis, i.e. at the transaction level. Sequential analysis is carried out taking into account the specifics of a particular workplace and time (workplace- and time-specific).

Sequential analysis of work processes:

a) establishes the usual order of execution of the process,

b) identifies the most common variations, and

c) identifies other variations (less common or exceptions) requiring non-standard (unusual / irregular) intervention.

In a sequential analysis of existing workflows, the historical sequences of transactions are compared against the requirements identified in the context analysis. When designing new workflows, sequential analysis makes it possible to document transactions in conjunction with the appropriate contextual rules.

7.1.2 Sequential analysis can be used to examine the workflows that create documents that are then put into correspondence cases or case files and dossiers. The analysis can cover the processing of these documents and the processes of developing patterns and typical routes used to solve problems. The results of such analysis can be useful for the development of office automation systems, using, for example, workflow processes that integrate document management with the execution of business tasks. Therefore, a sequential analysis should:

a) identify trigger events that trigger the creation of transaction documents;

b) establish a link between transactions and those who authorize them and / or documents (for example, with authorized officials of the organization and / or regulations such as laws, regulations, policies);

c) determine what data about transactions performed during the workflow is created, modified and stored, and

d) define the content and metadata of the documents required to document the completed transactions.

7.1.3 The main elements of sequential analysis are:

a) identifying the sequence of transactions during the workflow;

b) identifying and analyzing variations in processes,

c) establishing rules governing identified transactions, and

d) identifying relationships with other processes and systems.

The order in which the analysis elements are performed depends on the nature of the problem being solved. Any documentation available that provides an analysis of the transaction sequences used by the organization should be considered.

The bulk of the work performed consists of a number of interdependent processes, i.e. processes that receive information and/or other resources produced by one process as input and produce information and/or other resources for another process; that must be completed before the next process can begin; or that use data, authorizations or materials from already existing sources. In some cases there is complete interdependence between the stages of a process, when one stage cannot begin before another is completed. For example, a stage involving training employees on a specific topic cannot be completed before the corresponding training course has been developed.

In other cases, the dependence may be only partial. For example, the organizational issues of training (such as determining the date and location of training) may begin before the development of the corresponding training course is completed. In other words, although a particular step in the process (step B) may depend on another step in the process (step A), execution of step B may begin before step A is completed.
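Such complete and partial dependencies can be checked mechanically. Purely as an illustration (not part of the standard), the following sketch orders the training example from the text using Python's standard graphlib module; the step names are ours.

# Ordering interdependent process steps: a step may start only after the
# steps it depends on have completed. Requires Python 3.9+ (graphlib).
from graphlib import TopologicalSorter

# step -> set of steps that must complete first
dependencies = {
    'develop training course': set(),
    'schedule date and venue': set(),    # only partially dependent (see text)
    'deliver training to employees': {'develop training course',
                                      'schedule date and venue'},
}

order = TopologicalSorter(dependencies)
print(list(order.static_order()))
# A circular dependency would raise graphlib.CycleError here.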

7.2 Determining the sequence of transactions during the workflow

The first step is to identify, at the appropriate level of granularity, the sequence of transactions in each process.

Table 4 provides a list of questions to be answered when identifying the sequence of transactions for each process.


Table 4 Identifying the sequence of transactions

What triggers the execution of a sequence of transactions, and how is this documented?

What information and other resources are needed to run a sequence of transactions?

Where do information and other resources come from?

What initiates the execution of the next transactions?

How do the participants in the process know about the completion of each transaction?

Are there parallel sequences of transactions at any point in the process?

If so, at what point do parallel transaction sequences converge?

What are the basic conditions that must be met to obtain permission (authorization) to execute a sequence of transactions?

Where and how are decisions and transactions documented as the sequence of transactions proceeds?

How does a sequence of transactions end, and how is this documented?

7.3 Results of analysis of the sequence of transactions during the workflow

7.3.1 Initially, sequential analysis identifies and documents:

a) the basic or regularly used order of transactions in the workflow,

b) document creation processes, and

c) key transactions that must be completed before the next transaction can begin.

7.3.2 Sequential analysis identifies and documents workflow dependencies, including input from other processes, information and other resources, such as:

a) information about delegation of authority,

b) formalized procedures defining the points of creation, capture and completion of documents,

c) identifying metadata elements, and

d) auditing and monitoring processes requiring documented evidence.

7.4 Identifying and analyzing variation in processes

Many processes consist of a routine sequence of transactions and of variations that occur when changes to key elements force a change in the order in which transactions are executed. To ensure that the document management system captures information about variations, possible variations and the reasons for their occurrence must be identified. This element of workflow analysis is critical when appraising workflows to determine document capture requirements.

Table 5 provides a list of questions to be answered when identifying and analyzing variation in workflows.


Table 5 Identification and analysis of variations in the process

What happens when these conditions are not met?

What procedures are used to establish these conditions and any changes to them?

Which participant initiates the variation in the process?

What happens if any information or other resources and systems required to complete the process are unavailable?

If a workflow needs to be redirected, where will it be directed?

Are there other, occasionally used ways to execute a sequence of transactions, and if so, why?

What events can prevent a process from running normally?

What happens when a process cannot run normally?

Are there contingency procedures for dealing with situations when something goes wrong?

Who is responsible for responding to process failures and complaints about process performance?

What information and / or documents are created, stored and / or transferred to other processes in case of variations in the sequence of transactions?

7.5 Results of analysis of process variations

The analysis identifies and documents common variations of the normal process.

The results of the analysis of the usual (regular) process and of its possible variations can be used to develop a typical scheme (template) for the normal sequence of transactions and for the most common variations. The creation and capture of documents that reflect the process can be built into such a typical scheme. Documents generated by individual transactions within the process should be evaluated to ensure that they remain relevant as they move through the process steps, especially when the sequence of transactions takes the process outside the business unit in which it was initiated.

Business activity in a purely electronic environment depends, on the one hand, on the systematic documentation of information about the identity of the users of the systems available in the organization and, on the other, on documenting the roles, delegated authority and rights of users in a particular system. When managing the documents generated during the process, account should be taken of the need to document changes in user composition and rights, synchronized with documented information about role changes over time.

7.6 Establishing rules governing identified transactions

After defining the structure of the sequence of transactions, the reasons for performing each of the steps should be documented. Reasons can range from legal references, organizational procedures manuals, local audit practices and requirements, to the needs of the software application being used.

The rationale for performing each step should be documented from a number of sources as follows:

a) the organization's existing procedures should be consulted;

b) interviews should be held with the participants in the process;

c) the manager or supervisor responsible or accountable for the process should be identified and consulted;

d) if forms are used to structure the process, each form element should be examined to determine its purpose;

e) audit trails (audit trails) should be reviewed to determine their contribution to the sequence of transactions;

f) process-specific regulatory requirements should be reviewed to document the relevant process elements and identify gaps.

Table 6 provides a list of questions to be answered when establishing the rules governing identified transactions.


Table 6 Establishment of procedural rules governing the execution of transaction sequences

Which transactions must be compliant with legal and regulatory requirements?

Which transactions are primarily determined by the means used by the process (technology used, physical and organizational conditions)?

What transactions are performed to gain access to the information required for the process?

What transactions are required to obtain and document authorization and completion of individual steps?

What transactions are used to monitor the progress of the process and its results?

7.7 Summary of the analysis of the rules governing the execution of transactions

All rules are documented in the approved procedures of the organization.

This element of analysis identifies the requirements for the workflow in terms of generating the evidence needed to evaluate it (appraisal). Where document capture is integrated into the workflow, the reason for each of the transactions should be clear from the generated documents (for example, whether it was authorization, verification, performance metrics, or sign-off completing a sequence of transactions).

Where document capture is a separate step in a process, the reasons for the transactions that constitute the process should be documented in formal procedural process documentation. This element of analysis identifies gaps in the capture of process evidence that need to be addressed when revising the capture requirements.

7.8 Identifying relationships with other processes

This element of analysis identifies the information and other resources that enter the workflow and that it uses: the participants in the process, information and other resources, technology and timing. The analysis goes beyond the scope of a specific process in order to study its connections with other processes (within the organization or across several organizations) from which it receives input information and other resources, and through which the process delivers its results to the organization. To this end, this element of analysis relies on elements of functional analysis to identify links with other work processes and to identify the impact of the process in question on other parts of the organization. This analysis helps to accurately determine the cost of the process to the organization.

Table 7 provides a list of questions that should be answered when identifying links with other processes.


Table 7 Identification of connections with other processes

Does this process require information and other resources as input from other processes?

If so, what is the nature of the inputs received from other processes (information or other resources)?

What documents and / or other sources of information are accessed during the execution of the process under consideration, and how are they modified by this process?

Does the process cover multiple business units, organizations or jurisdictions?

If so, how are other business units, organizations or jurisdictions involved in the process?

Does this process produce outputs that other processes need? If so, what is the nature of these results?

Does the process modify documents or information / data? If so, what is the nature of this modification?

What information and / or documents are created, stored and / or transferred to other processes? Where are they transferred to?

How else are the documents and information generated by this process used?

7.9 Results of the analysis of links with other processes

The links between the specific analyzed workflow and the rest of the organization (or organizations) are identified and documented, in particular information about the information and other resources entering this process from other processes and systems, about the results obtained at the output of the process, and about the documents created during the process.

This element of sequential analysis is key to:

a) examination of the value of documents (appraisal),

b) identification and selection of arrays of documents to be destroyed or transferred for permanent archival storage,

c) developing business classification schemes,

d) identifying redundancy / duplication of documents generated during the processes, and

e) developing a metadata schema.

8 Checking the results of the analysis of work processes by their participants

8.1 General

The review should confirm the completeness of the analysis performed, including that the functional analysis has taken into account and correctly grouped all relevant processes, and that all relationships between processes are documented.

In order to ensure the accuracy of data collection and documentation, it is important that the results of the analysis of work processes be verified by the participants in those processes. The collected material should first be carefully reviewed by those who provided the relevant information, and then checked by other participants performing the same or similar duties in other parts of the organization. Where appropriate, the process or elements of it can be performed in real time, thereby providing additional validation of the information collected. The review is performed to ensure that the workflow and transaction information is correct, and it assumes that the organization has already completed streamlining its business processes.

When planning a workflow analysis, decisions should be made about who, when, and how to review the analysis results.

8.2 Verification process

Table 8 provides a list of questions that should be answered when checking the analysis results.


Table 8 Checking the results of the analysis of work processes by their participants

Have all required process transactions been analyzed?

Are the documented reasons for each transaction correct?

Are the transaction sequences and their relationships accurately described?

Have variations in transaction sequences been identified and documented?

Have all the processes that make up the business function been identified and documented?

Have the links between processes been accurately established and documented?

Has the context in which the organization operates its work processes been clearly established and documented?

Are the descriptions and their terminology consistent with those used in the organization so that they can be easily understood?

8.3 Results of the review of the results of the analysis of work processes by their participants

Upon completion of the verification process, the documentation created during the analysis is approved by a manager at the appropriate level as the basis for the records-related actions for which the analysis was carried out. Whatever set of elements and/or methods of analysis is used, verification of the results of the analysis of work processes by their participants is the most important final stage.

Upon completion of the project, all documentation generated from workflow analysis, including diagrams and models, is consolidated.

A final report is issued to the relevant business leaders of the organization and records management personnel, including conclusions, recommendations and an action plan for their implementation.

Appendix DA (reference). Information on the correspondence of the referenced international standards and documents to the national standards of the Russian Federation

Appendix DA
(reference)

Information on the correspondence of the referenced international standards and documents to the national standards of the Russian Federation

Designation of the referenced international standard or document | Degree of compliance | Designation and title of the corresponding national standard

ISO 15489-1:2001 | IDT | GOST R ISO 15489-1-2007 System of standards for information, librarianship and publishing. Document management. General requirements

ISO/TR 15489-2:2001 | - | *

ISO 23081-1:2006 | IDT | GOST R ISO 23081-1-2008 System of standards for information, librarianship and publishing. Document management processes. Metadata for documents. Part 1. Principles

ISO 23081-2:2009 | - | *

* There is no corresponding national standard. Prior to its approval, it is recommended to use the Russian translation of this international standard, which is held in the Federal Information Fund for Technical Regulations and Standards.

Note - In this table the following convention is used for the degree of compliance of standards: IDT - identical standards.

Appendix DA (Amended, IUS No. 2-2015).



Network monitoring and analysis

Constant monitoring of the network is necessary to maintain it in working order. Monitoring is the necessary first step in network management. This work is usually divided into two stages: monitoring and analysis.

At the monitoring stage the simpler procedure is performed - the collection of primary data on network operation: statistics on the number of frames and packets of various protocols circulating in the network, the state of the ports of hubs, switches and routers, and so on.

Next, the analysis stage is performed, which is understood as a more complex and intelligent process of understanding the information collected at the monitoring stage, comparing it with the data obtained earlier, and making assumptions about the possible causes of slow or unreliable network operation.

Tools for monitoring the network and detecting bottlenecks in its operation can be divided into two main classes:

  • strategic;
  • tactical.

The purpose of strategic tools is to control a wide range of parameters of the entire network and to solve problems of LAN configuration.

The purpose of tactical tools is to monitor and troubleshoot network devices and network cable.

Strategic tools include:

  • network management systems
  • built-in diagnostic systems
  • distributed monitoring systems
  • diagnostic tools for operating systems running on large machines and servers.

Network management systems developed by companies such as DEC, Hewlett-Packard, IBM and AT&T provide the most complete oversight. These systems are usually based on a separate computer and include control systems for workstations, cabling, connection and other devices, a database containing control parameters for networks of various standards, as well as a variety of technical documentation.

One of the best network management solutions, allowing the network administrator to access all network elements down to the workstation, is Intel's LANDesk Manager package, which combines application monitoring programs, hardware and software inventory tools and virus protection. This package provides a variety of real-time information about application programs and servers, as well as data on users' network activity.

Embedded diagnostics have become a common feature of networking devices such as bridges, repeaters and modems. Examples of such systems are OpenView Bridge Manager from Hewlett-Packard and Remote Bridge Management Software from DEC. Unfortunately, most of them are focused on the equipment of a single manufacturer and are practically incompatible with the equipment of other companies.

Distributed monitoring systems are special devices installed on network segments and designed to obtain comprehensive information about traffic, as well as network disruptions. These devices, usually connected to an administrator workstation, are primarily used on multi-segment networks.

Tactical tools include various kinds of testing devices (network cable testers and scanners), as well as devices for comprehensive analysis of network operation - protocol analyzers. Testing devices help the administrator detect faults in network cables and connectors, while protocol analyzers help obtain information about the exchange of data on the network. In addition, this category includes special software that provides detailed real-time reports on the status of the network.

Monitoring and analysis tools

Classification

Besides the four classes already described in the introduction - network management systems, system management tools, embedded diagnostics and management systems, and protocol analyzers - the variety of tools used for monitoring and analyzing computer networks includes the following classes:

Expert systems. This type of system accumulates the knowledge of technical specialists about identifying the causes of abnormal network operation and possible ways of bringing the network back into a working state. Expert systems are often implemented as subsystems of various network monitoring and analysis tools: network management systems, protocol analyzers, network analyzers. The simplest variant of an expert system is a context-sensitive help system. More complex expert systems are so-called knowledge bases with elements of artificial intelligence. An example of such a system is the expert system built into the Cabletron Spectrum management system.

Equipment for diagnostics and certification of cable systems. Conventionally, this equipment can be divided into four main groups: network monitors, devices for the certification of cable systems, cable scanners and testers (multimeters).

Network monitors (also called network analyzers) are designed to test various categories of cables. Network monitors should be distinguished from protocol analyzers: network monitors collect data only on traffic statistics - the average total network traffic, the average intensity of packets with a certain type of error, and so on.

The purpose of devices for the certification of cable systems follows directly from their name: certification is carried out in accordance with the requirements of one of the international standards for cabling systems.

Cable scanners are used to diagnose copper cable systems.

Testers are designed to check cables for physical breakage.

Multifunctional analysis and diagnostic devices. In recent years, with local networks becoming ubiquitous, it has become necessary to develop inexpensive portable devices that combine the functions of several instruments: protocol analyzers, cable scanners and even some capabilities of network management software. Examples of this type of device are the Compas from Microtest Inc. and the 675 LANMeter from Fluke Corp.

Protocol Analyzers

When designing a new network or upgrading an old one, it is often necessary to quantify certain characteristics of the network, such as the intensity of data flows over network communication lines, the delays occurring at various stages of packet processing, response times to requests of one kind or another, the frequency of occurrence of certain events, and other characteristics.

For these purposes, different means can be used, above all the monitoring tools in network management systems, which have already been discussed in the previous sections. Some measurements on the network can also be performed by software meters built into the operating system; an example is the Windows NT Performance Monitor component. Even modern cable testers are capable of capturing packets and analyzing their contents.

But the most advanced network exploration tool is a protocol analyzer. The protocol analysis process involves capturing and examining the packets that are circulating in the network that implement a particular network protocol. Based on the results of the analysis, you can make informed and balanced changes to any network components, optimize its performance, and troubleshoot. Obviously, in order to be able to draw any conclusions about the impact of some change on the network, it is necessary to analyze the protocols both before and after the change.

The protocol analyzer is either a stand-alone specialized device or a personal computer, usually a portable notebook-class machine, equipped with a special network card and the corresponding software. The network card and software used must match the network topology (ring, bus, star). The analyzer connects to the network in the same way as an ordinary node. The difference is that the analyzer can receive all data packets transmitted over the network, while an ordinary station receives only those addressed to it. The analyzer software consists of a kernel, which supports the operation of the network adapter and decodes the received data, and additional program code that depends on the topology of the network under investigation. In addition, a number of protocol-specific decoding routines are provided, for example for IPX. Some analyzers may also include an expert system that can give the user recommendations on which experiments should be carried out in a given situation, what particular measurement results may mean, and how to eliminate certain kinds of network malfunctions.

Despite the relative diversity of protocol analyzers on the market, there are some features that are more or less inherent in all of them:

  • User interface. Most analyzers have a well-developed user-friendly interface, usually based on Windows or Motif. This interface allows the user to: display the results of traffic intensity analysis; get an instant and average statistical estimate of the network performance; set specific events and critical situations to track their occurrence; decode protocols of different levels and present the contents of packets in an understandable form.
  • Capture buffer. The buffers of the various analyzers differ in size. The buffer can be located on the installed network card, or it can be allocated space in the RAM of one of the computers on the network. If the buffer is located on a network card, then it is managed in hardware, and due to this, the input speed increases. However, this leads to an increase in the cost of the analyzer. In case of insufficient performance of the capture procedure, some of the information will be lost, and analysis will be impossible. The buffer size determines the analysis capabilities for more or less representative samples of the captured data. But no matter how large the capture buffer is, sooner or later it will fill up. In this case, either the capture stops, or the filling starts from the beginning of the buffer.
  • Filters. Filters allow you to control the process of capturing data and thus save buffer space. Depending on the values of certain packet fields, specified as a filter condition, the packet is either ignored or written to the capture buffer. Using filters greatly speeds up and simplifies the analysis, since it excludes viewing packets that are not needed at the moment (see the sketch after this list).
  • Switches are conditions, set by the operator, for starting and stopping the process of capturing data from the network. Such conditions include the execution of manual commands to start and stop the capture process, the time of day, the duration of the capture process, and the appearance of certain values in data frames. Switches can be used in conjunction with filters, allowing more detailed and subtle analysis and more productive use of the limited capture buffer.
  • Search. Some protocol analyzers allow you to automate the viewing of information in the buffer and find data in it according to specified criteria. While the filters check the input stream against the filtering conditions, the search functions are applied to the data already accumulated in the buffer.
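As an illustration of how a capture filter and per-layer decoding work together, here is a minimal sketch using the Python scapy library; the interface name, the BPF filter expression and the packet count are assumptions made for this example (capturing usually also requires administrator privileges):

# A minimal packet-capture sketch using scapy (pip install scapy).
# The interface name "eth0" and the filter "tcp" are example assumptions.
from scapy.all import sniff, IP, TCP

def show_packet(pkt):
    """Decode and print the addressing fields of one captured packet."""
    if IP in pkt and TCP in pkt:
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}")
    # pkt.show() would dump every field of every layer, much like the
    # full multi-layer decode performed by a protocol analyzer.

# Capture 100 TCP packets on eth0; the BPF expression plays the role of
# the analyzer's capture filter and saves buffer space.
sniff(iface="eth0", filter="tcp", prn=show_packet, count=100)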

The analysis methodology can be presented in the following six stages:

  1. Capturing data.
  2. View captured data.
  3. Data analysis.
  4. Search for errors. (Most analyzers make this easier by identifying error types and identifying the station from which the error packet came.)
  5. Performance research. The utilization rate of the network bandwidth or the average response time to a request is calculated (see the sketch after this list).
  6. A detailed study of individual sections of the network. The content of this stage is concretized as the analysis is carried out.
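As an illustration of stage 5, channel utilization over a capture interval is simply the ratio of observed traffic to theoretical capacity. A minimal sketch with invented example numbers:

def utilization(bytes_captured: int, interval_s: float, bandwidth_bps: float) -> float:
    """Fraction of the channel bandwidth used during the capture interval."""
    return (bytes_captured * 8) / (interval_s * bandwidth_bps)

# Example: 75 MB captured in 60 s on a 100 Mbit/s Ethernet segment
print(f"{utilization(75_000_000, 60.0, 100e6):.1%}")  # 10.0%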

Usually, the process of analyzing the protocols takes relatively little time - 1-2 business days.

Network analyzers

Network analyzers (not to be confused with protocol analyzers) are the reference measuring instruments for diagnosing and certifying cables and cabling systems. Examples are the network analyzers from Hewlett-Packard - the HP 4195A and HP 8510C.

Network analyzers contain a high-precision frequency generator and a narrow-band receiver. By transmitting signals of different frequencies into the transmitting pair and measuring the signal in the receiving pair, attenuation and NEXT can be measured. Network analyzers are precision, large and expensive (over $20,000) instruments intended for use in laboratory conditions by specially trained technical personnel.
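For illustration, both attenuation and NEXT are logarithmic ratios of signal amplitudes. A minimal sketch of the computation; the amplitude values are invented example inputs:

import math

def ratio_db(reference: float, measured: float) -> float:
    """Logarithmic ratio of two signal amplitudes, in decibels."""
    return 20 * math.log10(reference / measured)

# Attenuation: injected amplitude vs. amplitude at the far end of the pair.
print(f"attenuation: {ratio_db(1.0, 0.1):.1f} dB")   # 20.0 dB
# NEXT: injected amplitude vs. amplitude induced on an adjacent pair.
print(f"NEXT:        {ratio_db(1.0, 0.01):.1f} dB")  # 40.0 dB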

Cable scanners

These devices allow you to determine the cable length, NEXT, attenuation, impedance, the wiring diagram and the electrical noise level, and to evaluate the results obtained. Prices for these devices range from $1,000 to $3,000. There are many devices of this class, for example scanners from Microtest Inc., Fluke Corp., Datacom Technologies Inc. and Scope Communication Inc. Unlike network analyzers, scanners can be used not only by specially trained technical personnel but even by novice administrators.

The cable radar method, or Time Domain Reflectometry (TDR), is used to locate a cabling fault (open circuit, short circuit, misplaced connector, etc.). The essence of this method is that the scanner emits a short electrical pulse into the cable and measures the delay until the arrival of the reflected signal. The nature of the cable damage (short circuit or open circuit) is determined from the polarity of the reflected pulse. In a correctly installed and terminated cable there is no reflected pulse at all.

The accuracy of the distance measurement depends on how accurately the speed of propagation of electromagnetic waves in the cable is known, and this differs from cable to cable. The speed of propagation of electromagnetic waves in a cable (NVP, nominal velocity of propagation) is usually given as a percentage of the speed of light in a vacuum. Modern scanners contain a table of NVP data for all major cable types and allow the user to set this parameter after preliminary calibration.
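The distance computation itself is straightforward: the pulse travels to the fault and back, so the one-way distance is half the round-trip path. A minimal sketch; the NVP value and the delay are invented example inputs:

# Sketch of the cable-radar (TDR) distance computation.
C = 299_792_458  # speed of light in a vacuum, m/s

def fault_distance_m(delay_s: float, nvp: float) -> float:
    """Distance to a cable fault from the reflected-pulse delay.

    delay_s - round-trip delay between the emitted and reflected pulse, s;
    nvp     - nominal velocity of propagation as a fraction of c
              (e.g. about 0.69 for a typical twisted-pair cable).
    """
    return nvp * C * delay_s / 2  # divide by 2: the pulse travels there and back

# Example: a 100 ns round trip in a cable with NVP = 0.69
print(f"{fault_distance_m(100e-9, 0.69):.1f} m")  # ~10.3 m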

The most well-known manufacturers of compact (their size usually does not exceed the size of a VHS video cassette) cable scanners are Microtest Inc., WaveTek Corp., Scope Communication Inc.

Testers

Cable testers are the simplest and cheapest cable diagnostics tools. They allow you to determine the continuity of the cable, however, unlike cable scanners, they do not provide an answer to the question of where the failure occurred.

Built-in network monitoring and analysis

SNMP agents

Today there are several standards for management information bases (MIB). The main ones are the MIB-I and MIB-II standards, as well as the RMON MIB version of the database for remote control. In addition, there are standards for the MIBs of devices of a particular type (for example, MIBs for hubs or MIBs for modems), as well as proprietary MIBs of specific equipment manufacturers.

The original MIB-I specification defined only read operations on variable values. Operations for changing or setting the values of an object are part of the MIB-II specification.

The MIB-I version (RFC 1156) defines up to 114 objects, which are classified into 8 groups:

  • System - general information about the device (for example, vendor ID, time of last system initialization).
  • Interfaces - describes the parameters of the device's network interfaces (for example, their number, types, exchange rates, maximum packet size).
  • Address Translation Table - describes the correspondence between network and physical addresses (for example, using the ARP protocol).
  • Internet Protocol - data related to the IP protocol (addresses of IP gateways and hosts, statistics on IP packets).
  • ICMP - data related to the ICMP control message exchange protocol.
  • TCP - data related to the TCP protocol (for example, about TCP connections).
  • UDP - data related to the UDP protocol (numbers of transmitted, received and erroneous UDP datagrams).
  • EGP - data related to the Exterior Gateway Protocol routing exchange protocol used on the Internet (numbers of messages received with and without errors).

From this list of variable groups it can be seen that the MIB-I standard was developed with a rigid focus on managing routers supporting the protocols of the TCP/IP stack.

In the MIB-II version (RFC 1213), adopted in 1991, the set of standard objects was significantly expanded (to 185), and the number of groups increased to 10.
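By way of illustration, reading two variables of the System group from an SNMP agent might look like the following minimal sketch using the pysnmp library; the device address 192.0.2.1 and the community string "public" are assumptions made for this example:

# Minimal SNMP GET sketch using pysnmp (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=0),       # SNMPv1, as used with MIB-I/II
    UdpTransportTarget(('192.0.2.1', 161)),   # agent address and UDP port
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),   # System group
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")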

RMON agents

The newest addition to SNMP functionality is the RMON specification, which enables remote interaction with the MIB. Prior to RMON, the SNMP protocol could not be used effectively in a remote fashion; it assumed only local management of devices. The RMON MIB has an improved set of properties for remote management, since it contains aggregated information about the device, which does not require the transfer of large amounts of information over the network. RMON MIB objects include additional packet error counters, more flexible graphical trend analysis and statistics, more powerful filtering tools for capturing and analyzing individual packets, and more complex alert conditions. RMON MIB agents are more intelligent than MIB-I or MIB-II agents and do much of the device-information processing that managers used to do. These agents can be located inside various communication devices or implemented as separate software modules running on universal PCs and laptops (an example is Novell's LANalyzer).

The RMON object has number 16 in the MIB object set, and the RMON object itself contains 10 groups of objects:

  • Statistics - current accumulated statistics on packet characteristics, number of collisions, etc.
  • History - statistical data saved at regular intervals for subsequent analysis of trends in their changes.
  • Alarms - statistic thresholds above which the RMON agent sends a message to the manager.
  • Host - data about hosts on the network, including their MAC addresses.
  • HostTopN is a table of the busiest hosts on the network.
  • TrafficMatrix - statistics about the traffic intensity between each pair of hosts on the network, sorted in a matrix.
  • Filter - packet filtering conditions.
  • PacketCapture - conditions for capturing packets.
  • Event - conditions for registering and generating events.

These groups are numbered in the order shown, so, for example, the Hosts group has the numeric name 1.3.6.1.2.1.16.4.

The tenth group consists of special objects of the Token Ring protocol.
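Since the groups are numbered in the order listed above, the numeric name of any group can be derived mechanically. A minimal illustrative sketch; the group identifiers follow RFC 1271 and RFC 1513, while the helper function itself is an invented example:

# Numeric names of RMON groups under mib-2 (1.3.6.1.2.1), where rmon = 16.
RMON_BASE = "1.3.6.1.2.1.16"
RMON_GROUPS = ["statistics", "history", "alarm", "hosts", "hostTopN",
               "matrix", "filter", "capture", "event", "tokenRing"]

def rmon_group_oid(name: str) -> str:
    """Return the numeric OID of an RMON group, e.g. 'hosts' -> 1.3.6.1.2.1.16.4."""
    return f"{RMON_BASE}.{RMON_GROUPS.index(name) + 1}"

print(rmon_group_oid("hosts"))  # 1.3.6.1.2.1.16.4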

In total, the RMON MIB standard defines about 200 objects in 10 groups, recorded in two documents - RFC 1271 for Ethernet networks and RFC 1513 for Token Ring networks.

A distinctive feature of the RMON MIB standard is its independence from the network layer protocol (in contrast to the MIB-I and MIB-II standards, which are oriented to the TCP/IP protocols), which makes it convenient to use in heterogeneous environments with different network layer protocols.


See what "Network Monitoring and Analysis" is in other dictionaries:

    Main article: Program evaluation Program monitoring From a methodological point of view, program monitoring can be viewed as an evaluation procedure, the purpose of which is to identify and / or measure the effects of ongoing actions without ... ... Wikipedia

    The style of this article is not encyclopedic or violates the norms of the Russian language. The article should be corrected according to the stylistic rules of Wikipedia ... Wikipedia

    This article or section needs revision. Please improve the article in accordance with the rules for writing articles. The term network monitoring is called ... Wikipedia

    - (environmental monitoring) is a complex system for observing the state of the environment, assessing and forecasting changes in the state of the environment under the influence of natural and anthropogenic factors. Usually the territory already has ... Wikipedia

    network monitoring 3.30 network monitoringprocess of continuously observing and analyzing recorded data of network activity and operations, including audit logs and alarms, and related analysis. A source … Dictionary-reference book of terms of normative and technical documentation

Process documentation

In accordance with STB ISO 9001, section 4.1, the list of mandatory processes that must be documented is not regulated. Each organization independently determines which processes should be documented, guided by customer requirements, regulations, field of activity, and its corporate strategy.

The scope of documentation in the quality management system is determined by the organization's management based on the following requirements:

  • ensure the reproducibility of any process and the fulfillment of the requirements of STB ISO 9000 by the personnel of the enterprise;
  • ensure the possibility of proving the conformity of the quality management system to the requirements of STB ISO 9001 during audits;
  • fulfill the requirements of STB ISO 9001 for documenting procedures.

However, the standard contains a number of requirements, compliance with which an organization can demonstrate within the framework of a quality management system through the development of a number of documents. Among them, descriptions of processes should be highlighted, which may include:

  • process maps;
  • flowcharts of processes;
  • descriptions of processes in any acceptable form.

In this case, various methods can be used: graphic, verbal, visual, electronic.

The level of detail of the process descriptions should be determined based on the need and sufficiency to ensure the effectiveness of the process management. In accordance with STB ISO 9001, the following are subject to documentation within the process: planning and provision, process management, resources, control processes.

STB ISO 9001, section 4.2.1, mentions the following categories of documents on processes within the framework of a quality management system:

  • descriptions of processes;
  • procedures.

Notes

1. Since process descriptions are used in various documents of the quality management system, and STB ISO 9000 is based on the principle of a systematic approach to quality management, the creation of process descriptions precedes the creation of other documents in the quality management system. Consequently, the creation of process descriptions is the basis for the creation of documentation in the quality management system. In this context, the process description is the basis for creating a procedure.

2. Documents containing indirect information about processes (references to processes), for example, quality manuals, quality plans, job descriptions are not taken into account here.

3. Descriptions of processes, unlike the six mandatory procedures, are not mandatory documents of the quality management system in accordance with STB ISO 9001 (they are not a mandatory element of the documentation system).

In the quality management system, a distinction should be made between the purpose of the process description and the procedure.

The process description defines the essence of the process and its structure. The purpose of the description is to effectively plan, maintain, manage, and improve the process.

The procedure defines the sequence of actions within the process, which in the given conditions (ie "here and now") ensures the specified quality of the process. The essence of the procedure is the algorithm for executing the process in specific conditions.

NOTE A common way of representing algorithms is the flowchart, which can be used to represent process procedures in a quality management system.

The description of the process is primary in relation to the procedure and is the basis for the development of the latter, but not vice versa. It should be noted that for the same process there may be several procedures that differ, for example, in the conditions of their execution, sequence of actions, etc.

NOTE Flow charts do not provide a structure for processes and are therefore not an adequate way to describe processes. Other methods are used to describe processes that meet quality management requirements. This document proposes a method based on the IDEF0 functional modeling methodology.

Composition and structure of process documentation

Process documentation, used for effective planning, maintenance, management, and improvement, includes a process listing and process description.

List of processes

The list of processes contains the following:

  • records to identify process descriptions;
  • information that identifies the location of the Process List document in higher-level documentation, such as quality manuals;
  • information that allows you to identify the state of the document "List of processes": status (working version, approved, etc.), date of creation, author, date of approval, person who approved the document, date of modification, filing, etc.

NOTE The elements that make up a Process List document are governed by the appropriate organization's document control processes and procedures.

Process description

The process description includes the following:

  • information describing the process, including the name of the process, the internal structure of the process, i.e. the elements that make up the process, and the relationships between them, a description of the relationship of the process with other processes in the organization, a description of the owners of the process, consumers of the results of the process, providers of inputs and resources necessary for the execution of the process.

NOTE The level of detail (depth) of the process description is determined based on the complexity of the process, the size of the organization and the management needs of the organization;

  • process glossary.

NOTE - In cases where the process description uses terms that already exist in the organization (the definition of which is available in other documents of the organization), instead of the definition of the term, a reference to the document where this definition already exists is used;

  • information that identifies the place of the Process Description document in the higher-level documentation system, such as a quality manual or a documented procedure;
  • information that allows you to identify the state of the document "Description of the process": status (working version, approved, etc.), date of creation, author, date of approval, person who approved the document, date of modification and date of filing, etc.

NOTE The elements that make up a Process Description document are governed by the organization's appropriate document control processes and procedures.
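As an illustration only, the identification elements listed above for the "Process description" document could be modelled as a simple record. All field names here are assumptions made for the example, not prescribed by the standard:

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ProcessDescription:
    """Illustrative record of a 'Process description' document and its state."""
    name: str
    owners: list                          # owners of the process
    consumers: list                       # consumers of the process results
    inputs_from: list                     # providers of inputs and resources
    related_processes: list               # relationships with other processes
    status: str = "working version"       # e.g. "working version", "approved"
    created: Optional[date] = None
    author: str = ""
    approved_on: Optional[date] = None
    approved_by: str = ""
    glossary: dict = field(default_factory=dict)  # process glossary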

Determining the processes required for the system, their sequence and interaction is one of the most important and rather complex tasks in system development.

The entire defined network of processes must be described in the organization's quality manual or presented in the form of a diagram (map) of the organization's processes included in the quality manual or in a separate document (except for the case when the description of processes is based on the IDEF0 methodology).

A process can (and, as a rule, does) consist of sub-processes, and these, in turn, can also consist of sub-processes. These processes may be called by different names - processes of the 1st, 2nd, 3rd level, etc. The degree (depth) of decomposition of processes is determined by the organization itself. The ISO 9000 version 2000 family of standards makes no mention of processes at different levels or of the use of different names (process, sub-process, decomposition, etc.); a single name is used there - process. In principle, a process can consist of a single type of activity.

When defining processes, one must start with the highest-level processes, i.e. the processes that ensure the implementation of the organization's business strategy and whose consumers are external. As a rule, these processes consist of a number of lower-level processes whose consumers are already internal. In turn, each process (of any level) requires the creation of certain conditions that ensure its implementation and, in addition, all processes are subject to control.

Thus, taking into account decomposition, starting from the processes whose consumers are external, and adding the supporting and management processes, there can be several dozen processes in total. At the same time, all interrelationships of processes should be clearly described or visible on the diagram (map) of the organization's processes.

It is impractical to stop the decomposition at very high levels, since in this case it is unlikely that it will be possible to ensure effective planning, implementation and management of processes.

It also makes no sense to carry out decomposition to such a state when the process consists of one type of activity. And in general, the level of decomposition should be optimal, otherwise the essence of the process approach and the role of the process owner are lost.

It should be noted that every process can be assigned to one of the four blocks highlighted in STB ISO 9001-2001:

- the responsibility of the management;

- resource management;

- product life cycle processes;

- measurement, analysis and improvement.

This binding can also be used to identify processes.

Once the processes have been defined, it is necessary to define a leader for each process (owner, responsible, etc.), assigning him responsibility for the operation and improvement of this process and giving him certain powers that allow him to manage the process.

Considering that a process, as a rule, includes several types of activities and covers several departments, it is advisable for the process manager to determine those responsible for each key area of the process (type of activity, sub-process).

When determining the processes, their leaders and the scheme of interaction, it may be necessary and advisable to change the organizational structure of the organization.

Each process must have a defined input and output. Each process should produce an expected result, and the achievement of this result should be evaluated both in the execution and in the management of the process. These results should be identified, monitored and, where appropriate, measured.

Note. Decision-making based on these criteria is carried out taking into account the results of monitoring processes processed according to the methodology developed by the organization.

Once the network of processes and the expected results have been identified, and organizational changes are made as needed, work can be done to deploy the organization's quality objectives to objectives for the appropriate departments and levels of the organization.

An essential question when creating documentation is: when is it necessary to write a procedure, and when can one get by with another document, for example a work instruction or a plan? The following signs indicate when it is recommended to write a procedure:

- whether the activity is a process: whether it is required to indicate the input, output, resources used;

- have quality objectives been formulated for the activity;

- whether it is necessary to assess the achievability and effectiveness of the activity;

- in general, whether this activity affects the quality or not.

For accuracy, here is the definition: "A procedure is an established way of carrying out an activity or a process. Procedures can be documented or undocumented."

Another option for organizing work is the following:

- all documented procedures available within the current system are "laid out" against the already defined processes. This can be done in the form of tables or matrices (see the sketch after this list);

- empty places (in the table) or empty positions (in the matrix) are revealed;

- a list of missing procedures is drawn up;

- the compliance of the existing procedures with the requirements of STB ISO 9001-2001 and the quality objectives for the relevant departments and at the appropriate levels is analyzed in order to make a decision on the need to revise the existing procedures;

- a list of procedures to be developed is drawn up;

- other methods, acceptable to the organization (apart from documenting them as system documents), of communicating the procedures to the organization's employees (performers) are worked out.
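As an illustration of the table/matrix option referred to above, here is a minimal sketch; all process and procedure names are invented placeholders:

# Illustrative sketch: laying out existing documented procedures against the
# defined processes and revealing the gaps ("empty positions in the matrix").
processes = ["purchasing", "production", "internal audit", "corrective action"]
procedures = {
    "purchasing": ["supplier evaluation procedure"],
    "internal audit": ["audit planning procedure"],
}

# Processes for which no procedure has been laid out need new procedures.
missing = [p for p in processes if not procedures.get(p)]
print("Procedures to be developed for:", ", ".join(missing))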

As the documented procedures or other methods of communicating the procedures to the performers are developed, they can be transferred for pilot implementation. The necessity and feasibility of a pilot implementation is determined by the organization independently.

It is impossible to ensure the implementation of new documented procedures or other methods of bringing the worked out procedures of the system to the performers without training performers of all levels. That is why in the organization, before and during the pilot implementation of procedures, multilevel training should be organized for all employees of the organization.

Based on the purpose and objectives of documentation, the quality system documentation created at the enterprise must meet a number of strict requirements. The main ones are:

1. Documentation must be systematic, i.e. structured in a certain way, with clear internal links between the elements of the quality system.

2. The documentation must be comprehensive, i.e. cover all aspects of activities in the quality system, including organizational, economic, technical, legal, socio-psychological and methodological aspects.

3. The documentation must be complete, i.e. contain comprehensive information on all processes and procedures carried out in the quality system, as well as on the methods of recording quality data. At the same time, the amount of documentation should be minimal but sufficient for practical purposes.

4. Documentation should be adequate to the recommendations and requirements of the ISO 9000 family of standards. For this purpose, it is advisable to give in the introductory part of each document an exact reference to the specific section or clause of the standard in accordance with which the document was developed.

5. The documentation should contain only requirements that can actually be fulfilled; it must not establish unrealistic provisions.

6. Documentation must be easily identifiable. This assumes that each document of the quality system has an appropriate name, symbol and code that allow its belonging to a specific part of the system to be established.

7. Documentation must be targeted, i.e. each quality system document should be intended for a specific area of application and addressed to specific performers.

8. The documentation must be up to date. This means that the documentation as a whole, and each individual document, must reflect in a timely manner changes in the ISO 9000 family of standards and changes in the quality assurance conditions at the enterprise.

9. Documentation should be understandable to all its users - managers, specialists and performers. The text of a document should be short, precise, logically consistent, not open to different interpretations, and include only what is necessary and sufficient for its use.

10. The documentation must have an authorized status, i.e. each quality system document, and the documentation as a whole, must be approved or signed by authorized officials.

The quality system should provide for the correct labeling, distribution, collection and maintenance of all quality management documents.

The composition of the sections of the documented procedure in the general case may contain:

- the goal and/or purpose of the procedure;

- application area;

- terms, definitions, designations and abbreviations;

- responsibility and authority;

- a description of the activity in accordance with the purpose of the procedure;

- registered data (records);

- appendices.

Information on the agreement, approval, revision of the documented procedure should also be indicated.

The goal and/or purpose of the documented procedure can be determined taking into account the direction of the activities described in the procedure.

For example, the goal established in the corrective action procedure may be to eliminate the causes of identified nonconformities and prevent their recurrence, while the purpose of the procedure is to establish the order of development and implementation of corrective actions.

It is recommended to construct sections of a documented procedure, including its scope, normative references, terms and definitions, in accordance with STB 1.5.

The Responsibility and Authority section defines the responsibilities, authorities and interactions of personnel associated with the activities and / or processes described in the procedure.

The responsibilities and authorities of personnel for the functions performed can be presented in text form, in the form of tables and / or indicated in flowcharts provided in a documented procedure.

The organization can describe activities in accordance with the purpose of the procedure with varying degrees of detail, depending on the complexity of the particular type of activity and the training of personnel. The description of activities may include:

- input data;

- resources for the implementation of activities (personnel, documentation, equipment, materials);

- the algorithm of the activities performed, the sequence of actions performed in accordance with the established goal and purpose of the procedure;

- methods and means of monitoring;

- analyzed data on performance results, output data.

Flow diagrams may be used to describe activities using the symbols in Appendix A.

When describing activities, it is advisable to follow the Deming cycle methodology: plan - do - check - act.

The registered data (records) are established together with the form of their registration, and the records management procedure is subsequently applied to them.

In accordance with the recommended content of the documented procedure, based on the analysis of performance data, the need for improvement and revision of the procedure is determined. Information on the revision and / or amendments to the documented procedure is reflected in the order determined by the enterprise. The recommended order is in accordance with STB 1.5.

The documented procedures may contain references to work instructions that define the way activities are carried out. The structure of work instructions may differ from the structure of documented procedures.

Documented procedures can describe activities that include various interrelated functions, whereas work instructions are usually used to describe one function in a specific activity.

Documented procedures can be developed and presented on paper and / or electronically.

Electronic submission and maintenance of documents has the following advantages:

- permanent access to the information for the relevant personnel;

- easy updating and control of documentation;

- fast dissemination of information and the ability to print identified paper copies, for example by date;

- simple and effective cancellation of obsolete documents.

When building a documented procedure for document control, it is recommended to describe the main functions performed in this procedure:

- determining the need for documentation;

- planning the development or acquisition of documents;

- development, coordination, approval, implementation;

- revision, re-approval of documents;

- provision with updated documents of divisions;

- making changes;

- cancellation, withdrawal of documents, prevention of the use of obsolete documents.

The following documents of the quality management system are subject to management:

- documents of the quality management system (quality policy, quality manual, documented procedures, documents necessary to ensure the implementation of processes, work instructions, etc.);

- regulatory documents (GOST, STB, TU, etc.);

- technical documentation (CD, TD);

- regulations on divisions, job descriptions.

It is recommended that a documented procedure for managing quality records establish:

- the composition of the recorded quality data and the forms of their registration;

- responsibility for registration of records;

- the procedure for recording the registered data and their use;

- the procedure for storing, protecting and restoring records (if necessary);

- addresses, transmission channels and type of transmitted information (routes of information movement);

- interaction of subdivisions during transmission and receipt of registered data;

- storage periods, the procedure for withdrawing quality records.

In the documented procedure for internal audits, it is recommended to establish a procedure for planning, conducting and recording the results of internal audits, determining subsequent actions, and responsibility for performing work.

Follow-up actions typically include corrective actions taken to eliminate identified nonconformities and their causes, timing and responsibility for implementation.

The follow-up also includes:

- verification of implementation;

- assessment of the timeliness and effectiveness of corrective actions;

- assessment of the effectiveness of internal audits.

It is recommended that a documented procedure for the management of nonconforming product be established to:

- detection, identification, registration of nonconforming products;

- isolation of nonconforming products to avoid mixing with conforming products;

- determining the possibility of revision and further use of nonconforming products and making an appropriate decision by competent personnel;

- disposal of non-conforming products;

- analysis of the reasons for the manufacture of nonconforming products.

The level of responsibility and authority of decision-makers on nonconforming products should be consistent with the significance and potential consequences of the identified nonconformity. It is recommended that the authority to make decisions be documented.

It is recommended that the documented corrective action procedure establish:

- sources of information;

- the procedure for collecting information on existing inconsistencies;

- responsibility and procedure for establishing the causes of inconsistencies;

- planning and order of implementation of corrective actions;

- the procedure for assessing the effectiveness of corrective measures;

- interaction of departments and personnel in the implementation of these actions.

Sources for corrective action may include:

- consumer complaints;

- the output of the management review;

- appropriate records of the performance of the quality management system;

- output data for assessing the degree of customer satisfaction;

- data on the qualifications and training of personnel;

- the results of monitoring and measuring processes;

- data on nonconforming products;

- the results of external and internal audits.

A documented preventive action procedure is recommended to reflect:

- identification of potential nonconformities based on data analysis;

- analysis of the causes of potential inconsistencies;

- assessment of the need for preventive action;

- determination, planning and development of preventive actions;

- the procedure for the implementation and registration of preventive actions;

- assessment of the effectiveness of preventive actions.

Quality documentation is not created once and for all time - it is constantly revised. Therefore, document control is a critical element in a quality management system.

The reasons for creating new QMS documents or changing existing ones are:

- formation of QMS requirements and procedures;

- the emergence of new directions in the activities of the organization;

- the results of internal and external audits;

- change (improvement) of the organization's policy in the field of quality;

- the emergence of new versions of international standards ISO 9000 series;

- conditions of contractual situations in terms of QMS.

