Decentralized and distributed databases

Let's take a closer look at centralized and decentralized (peer-to-peer) computing networks. Networks are classified according to how management functions are distributed across the network.

Regardless of their type, all networks have common components, functions and parameters. From the point of view of network operation, the following components are distinguished:

1. Server - a computer that shares its resources with the network.

2. Client - a computer used for user access to network resources.

3. Medium - the means by which the computers are connected (network topology, cabling, network cards, modems, etc.).

4. Resources - files made available by the server over the network, databases, shared peripherals (printers, CD libraries, etc.).

There are two types of networks according to the access method:

1. Peer-to-peer networks (workgroups);

2. Networks based on the server (server based or server networks).

A server that is not used as a user's workstation is called dedicated. A server that can also be used as a workstation is called non-dedicated.

Network administration.

Administration is the centralized management of access to information in the network.

Administration tasks:

    managing user accounts and data protection;

    providing access to resources;

    application and data support;

    installation and upgrading of application software;

    consulting and supporting users.

1.1. Peer-to-peer networks

In a peer-to-peer network, all computers are "equal": each computer can act as both a client and a server, so no server in a peer-to-peer network is dedicated. Peer-to-peer networks are also often referred to as workgroups.
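
To make the dual role concrete, here is a minimal sketch in Python (the port number and file name are invented for the example) of a peer that serves its shared files in one thread while remaining free to request files from other peers:

```python
# Minimal peer-to-peer node sketch: every node runs a server thread
# and can also issue client requests to other peers.
import socket
import threading
import time

PORT = 9000  # arbitrary port chosen for this example

def serve(shared_files):
    """Server role: answer requests for shared file names."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            name = conn.recv(1024).decode()
            conn.sendall(shared_files.get(name, b"NOT FOUND"))

def fetch(peer_host, name):
    """Client role: request a file from another peer."""
    with socket.create_connection((peer_host, PORT)) as c:
        c.sendall(name.encode())
        return c.recv(65536)

# Each workstation runs the server thread and keeps acting as a client.
threading.Thread(target=serve, args=({"readme.txt": b"hello"},),
                 daemon=True).start()
time.sleep(0.5)  # give the server thread a moment to start listening
print(fetch("127.0.0.1", "readme.txt"))  # b'hello'
```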

Peer-to-peer networks have the following features:

    size - usually small (8-10 computers);

    cost - relatively low, as no expensive network equipment or additional specialized software is required;

    operating system (OS) - there is no need to purchase a special network operating system (many operating systems have built-in capabilities for combining computers into a peer-to-peer network).

Any peer-to-peer network is characterized by a number of standard solutions:

    computers are located on desktops;

    users themselves act as administrators and ensure the protection of information;

    a simple cabling system is used to network computers.

Data protection.

Each user acts as his own administrator and protects the data at his workplace. Typically, data protection comes down to setting a password on a shared resource.

User preparation.

The user must be highly qualified to work both as a user and as an administrator of his PC.

1.2. Dedicated Server Networks

If a network has dedicated servers, it is called a server-based network. In such networks, the server is usually a specialized computer optimized to quickly process requests from many network clients and to manage data protection.

In large networks, the tasks performed by a server are distributed among several servers. This ensures that each task assigned to a server is performed efficiently. Such servers are called specialized servers.

Some types of dedicated servers are listed below.

File server - controls access to shared files.

Print server - controls access to printers and printing processes.

Application server - runs the server-side parts of client-server applications and holds the data available to clients.

Mail server - manages the transmission of electronic messages between users.

Fax server - controls the flow of incoming and outgoing faxes.

Communication server - controls the flow of data between different networks.

Advantages over a peer-to-peer network.

1. Resource sharing and administration.

The server hardware is designed to provide maximum performance when sharing files and printers, all administration can be done centrally.

2. Protection of information.

Security management is centralized, so the level of security in a server-based network is much higher than in a peer-to-peer network.

3. Network size.

Dedicated server networks can have 1000 or more users.

4. Fault tolerance.

Server-based networks are highly resilient to failures and data loss.

5. Backup.

Backups can be done centrally.

6. Hardware.

For a server platform, it is advisable to choose a computer specially designed for this type of work, while the hardware of workstations should only meet the needs of the user. With a large number of user seats, you can save on their hardware without compromising the quality of service.

The flow of information today is enormous, so it is difficult to store it in such a way that at any moment you can easily find what you need. To store large amounts of information, databases are used - ordered collections of information. All databases can be divided into three types:

Centralized. In this case, all data is written into a single array, which is stored on one computer. To get information, you need to connect to the main computer, called a server.

Decentralized. In this case, there is no single central repository. Several servers provide information to clients, and these servers are connected to each other.


Distributed. There are no dedicated data stores: all nodes contain information, all clients are equal and have the same rights.


Problems of centralized databases

Although databases have been around for a long time, using them still involves a number of difficulties.

  • Security. Anyone who has access to the information server can add, modify and delete data.
  • Reliability. If multiple requests are received at the same time, the server may crash and stop responding.
  • Availability. If there are problems in the central repository, you will not be able to get the information you need until they are resolved. In addition, although different users have different needs, the process of accessing information is uniform and can be inconvenient for clients.
  • Data transfer rate. If the nodes are in different countries or on different continents, the connection to the server may be slow.
  • Scalability. Centralized networks are difficult to scale up, as server performance and the throughput of communication lines are limited.

Decentralized and distributed databases solve all of these problems.

Decentralized Database Security

There is no centralized storage in such databases: all data is distributed among the network nodes. If something is added, edited or deleted on any of the computers, this is reflected on all computers in the network. If the changes are authorized, the new information is distributed over the network to the other users; otherwise, the data is restored from backup to make it match the other nodes. The system is thus self-contained and self-regulating, and such databases are protected from deliberate attacks or accidental changes to information.
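
A hedged sketch of this self-regulation, with a simple majority vote standing in for a real consensus protocol:

```python
# Simplified self-healing sketch: an out-of-sync node restores a record
# to the value held by the majority of its peers. Real systems use full
# consensus protocols; this only illustrates the idea.
from collections import Counter

def reconcile(local_value, peer_values):
    """Return the majority value among this node and its peers."""
    majority, _ = Counter(peer_values + [local_value]).most_common(1)[0]
    return majority

peers = ["odometer=42000", "odometer=42000", "odometer=42000"]
tampered = "odometer=1000"  # unauthorized local change
print(reconcile(tampered, peers))  # 'odometer=42000' - restored
```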

Reliability, availability and data transfer speed in decentralized networks

Decentralized networks are capable of handling significant loads.

Data is available on all network nodes. Therefore, incoming requests are distributed among the nodes. Thus, the load falls not on one computer, but on the entire network. The overall performance of such a network is much higher than that of a centralized one.
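
As a toy illustration of this request spreading (the node names are invented), each incoming request can simply be routed to the currently least-loaded node:

```python
# Toy load-distribution sketch: send each request to the least-loaded node.
import heapq

def distribute(requests, node_names):
    heap = [(0, name) for name in node_names]  # (current load, node)
    heapq.heapify(heap)
    assignment = {}
    for req in requests:
        load, node = heapq.heappop(heap)
        assignment[req] = node
        heapq.heappush(heap, (load + 1, node))
    return assignment

print(distribute(range(7), ["node-a", "node-b", "node-c"]))
```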

Given that decentralized and distributed networks consist of a large number of computers, a DDoS attack can only succeed if the attacker's capacity far exceeds that of the entire network. Organizing such an attack would be extremely expensive, so decentralized and distributed networks can be considered safe.

Users can be located all over the world, and anyone can have connectivity problems. In decentralized and distributed networks, the client can choose the node through which to work with the information he needs.

Scaling different databases

The centralized network cannot be expanded significantly.

The centralized model assumes that all clients are connected to a server and data is stored only on that server, so all requests to obtain, change, add or delete information pass through the main computer. However, server resources are limited, so the server can work effectively only with a certain number of network participants. If there are more clients, the load may exceed this limit during peak periods. Decentralized and distributed models avoid such problems, since the load is distributed among several computers.
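
The limit is easy to see with back-of-the-envelope numbers; the figures below are invented purely for illustration:

```python
# Illustrative capacity arithmetic with made-up numbers.
server_rps = 1_000          # requests/second one server can handle
client_peak_rps = 2         # requests/second per client at peak

max_clients_centralized = server_rps // client_peak_rps
print(max_clients_centralized)            # 500 clients per server

nodes = 10                  # a distributed deployment of 10 equal nodes
print(nodes * max_clients_centralized)    # 5000 clients in total
```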

Application of decentralized and distributed databases

Such databases can speed up communication between different parts of the production chain.

Consider the following example. During its service life, a car goes through different stages: assembly, sale, insurance and so on, up to disposal. At each stage, a lot of documentation and reporting is generated, and if any clarification is needed, requests are sent to the relevant authorities. This takes a long time: physical location, different working languages and bureaucracy are just some of the obstacles.

Blockchain technology avoids all these problems. All information about each vehicle can be stored online; this data cannot be deleted or changed without the consent of the participants, and you have access to the information you need at any time. This scheme is implemented in practice by the authors of the CarFix project: building on the idea of smart contracts, they are working to ensure that the entire life cycle of any vehicle is logged in the blockchain.
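
A minimal sketch of the underlying idea (not the actual CarFix format; the record fields are invented): each entry about the car is chained to the previous one by a hash, so past entries cannot be altered unnoticed:

```python
# Append-only, hash-chained log sketch (not the actual CarFix format).
import hashlib
import json

def add_record(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    for i, rec in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"data": rec["data"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

log = []
add_record(log, "assembled 2021-03-01")
add_record(log, "sold 2021-05-12")
log[0]["data"] = "assembled 2019-01-01"  # tampering attempt
print(verify(log))  # False - the change is detected
```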


Centralized platforms also go against the basic freedoms of the Internet, such as the ability to navigate to any site by its address (Facebook, for example, pushes you to publish content only within Facebook) or the ability of search engines to index a social network's content (rather than relying on its internal search).

The concept of a decentralized network implies a future in which services such as communication, finance, publishing, social networks, search, archiving, etc., are not provided by centralized platforms that are controlled by an organization, but are managed by people, that is, by a community of users.

The key idea of decentralization is not to entrust the running of a particular service to a single all-powerful company. Instead, responsibility for running a service becomes a collective affair, whether by running it on multiple federated servers or in client-side applications in a fully "distributed" model.

Even if the community is large and its members cannot trust each other, the rules of these decentralized services are designed so that participants act fairly toward one another, otherwise the service will not work. To ensure that participants comply with the rules, cryptographic techniques such as Merkle trees and digital signatures are used.
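
For example, a Merkle tree boils an entire data set down to one short root hash, so participants can detect any tampering by comparing a single value. A minimal sketch:

```python
# Minimal Merkle root sketch: any change to any record changes the root,
# so participants only need to agree on one short hash.
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

records = [b"tx1", b"tx2", b"tx3", b"tx4"]
print(merkle_root(records))
print(merkle_root([b"tx1", b"tx2", b"tampered", b"tx4"]))  # different root
```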

The decentralized web clearly outperforms the traditional approach in three fundamental dimensions: privacy, data portability, and security.

  • Confidentiality. Decentralization pays more attention to the privacy of personal data. Data is distributed throughout the network, and end-to-end encryption technologies are used to restrict access to it (see the sketch after this list). Access to data is controlled solely by the network's algorithm, unlike in centralized networks, whose owner usually has access to all data and can influence customer profiles and ad targeting.
  • Data portability. In a decentralized system, users remain the owners of their data and can decide for themselves with whom to share it. Moreover, users retain control over their data when they move from one service provider to another (if the service has a concept of a provider at all). This point is important: if today I decide to switch from a General Motors car to a BMW, my driver's license stays with me. The same should apply to chat history and health records.
  • Security. We live in a world in which the number of threats to our security keeps growing. In a centralized system, the more valuable the information, the more attractive it is to fraudsters and criminals. The nature of decentralized platforms makes them more resistant to hacking, infiltration, theft and other threats, as they are designed from the outset to operate under the control of the community.
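
As a sketch of what end-to-end encryption means here (using the PyNaCl library; any authenticated public-key scheme would serve), only the recipient's private key can decrypt a message, so relaying nodes see nothing but ciphertext:

```python
# End-to-end encryption sketch with PyNaCl (pip install pynacl).
# Nodes in between see only ciphertext; only Bob's key can decrypt.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"my medical record")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'my medical record'
```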

In the same way that the very emergence of the Internet led to tremendous changes in its time, when individual local networks were united into a single neutral network, now, thanks to technology, a new common platform for higher-level services is emerging. And, just like the dawn of the Web 2.0 era, the first signs of the Web 3.0 era have been making themselves felt for several years.

An extremely successful version control system is the fully decentralized Git, which has almost completely replaced centralized systems like Subversion. Bitcoin demonstrates that a currency can exist without a central issuing authority and successfully compete with a centralized PayPal. Diaspora aims to offer a decentralized alternative to Facebook. Freenet pioneered decentralized websites, email and file sharing.

The somewhat less well-known StatusNet (renamed GNU Social) offers a decentralized alternative to Twitter. The XMPP protocol is a decentralized counterpart of messengers like AOL, ICQ and MSN.

Telephone center operators, 1914. Source: Flickr / raynermedia

Nevertheless, these technologies have always been somewhere on the fringes: they were used only by the geeks who invented them, who could easily overlook the shortcomings these services had for a mass audience. This is changing. Society is finally realizing that complete dependence on a handful of huge platforms is not the best option.

A whole generation of startups working on decentralized services has already attracted the industry's attention, and they are seriously expected to usher in a new era.

The network was conceived to be decentralized so that everyone could own their own domain and server, but this is no longer the case. Instead, people's personal data are now stored in huge arrays alongside everyone else's. [...] In this case, we propose to return to the idea of a decentralized network.

Give the power back to the people. We believe we can make a social revolution with relatively small changes: we will continue to use the web, but in such a way that the applications you use exist separately from your data.

It is becoming clear that the main task now is to refine these new technologies and bring them to the mass market. From a commercial point of view, decentralization holds great promise: while the current data silos may disappear, new opportunities will keep surfacing on new platforms, just as they did when the Internet was born.

A pioneer in this regard is GitHub, a $2 billion company that is a commercial service built on top of Git technology. Its users can retrieve their data and leave the service at any time.

Decentralized Ricochet network: Internet from the lamp post

The Ricochet wireless decentralized network developed from 1985 onward and existed in parallel with the ways of accessing the Internet that we are used to.

In the world of technology there is no place for philosophical disputes (which came first, the chicken or the egg?). There is always a pioneer who opens a new direction of movement for everyone else.

Now that 3G Internet can be set up by any average cook, and Wi-Fi hotspots in big cities sit literally on every corner, it seems incredible that a mere fifteen years ago over-the-air data transmission was out of the question for the average consumer. In those days there was not even wired broadband Internet - only good old dial-up, the rasping sounds of modem protocols, and work in an uncomfortable position (by Murphy's Law, the telephone socket in a hotel room always turned up in the farthest corner).

Amazingly, it was at this very time that one of the most interesting wireless data transmission technologies saw the light of day, a harbinger of today's wireless Internet access. This technology has a name that rings out like a shot: Ricochet.


Ricochet's past

The Ricochet network has a founding father, and what a father: Paul Baran, an American engineer of Polish origin and one of the inventors of packet-switched computer networks. Working at the state-funded RAND Corporation, Baran concluded in the early sixties that computer networks had to be developed with enough survivability to withstand the very real nuclear threat of the time.


Data transmission systems were then based on the architecture of general-purpose telephone networks and had either a centralized structure (the center being a telephone exchange) or a decentralized one (many interconnected centers, i.e. telephone exchanges). Obviously, even a method as reliable as packet data transmission could not give a one hundred percent guarantee of packet delivery within a centralized or decentralized network infrastructure.



Paul Baran is quite rightly called one of the founding fathers of the Internet, though his real specialty was always mesh networks.

Baran proposed an alternative infrastructure, which he called distributed. In a distributed network, every node is a potential router linked to one or more other nodes. Thanks to these redundant links, packets in a distributed network can travel along many dynamically generated alternative routes, which allows the network to keep functioning even when most of its nodes have failed.

A distributed network functioning according to these principles is called a mesh network.
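
A small sketch of why this redundancy matters: on the invented topology below, a breadth-first search still finds a route even after several nodes fail:

```python
# Mesh survivability sketch: BFS re-finds a route around failed nodes.
from collections import deque

MESH = {  # invented topology: each node links to several neighbors
    "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
    "D": ["B", "E", "F"], "E": ["C", "D", "F"], "F": ["D", "E"],
}

def find_route(src, dst, failed=frozenset()):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in MESH[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("A", "F"))                      # ['A', 'B', 'D', 'F']
print(find_route("A", "F", failed={"B", "D"}))   # ['A', 'C', 'E', 'F']
```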


The distributed network architecture proposed by Baran is one of the classic network architectures.

Baran offered mesh network technology to RAND Corporation's main customer, the US Air Force. However, due to lobbying by AT&T, which leased its telecommunications channels to the military, the project never got off the ground. Still, Baran's work caught the interest of the ARPANET developers: Larry Roberts, who headed the project at the DARPA laboratory, was impressed by the fault-tolerant network model Baran described in his article "On Distributed Communications Networks" and invited him onto the project as an unofficial consultant.

Baran's involvement in creating the first versions of ARPANET led to the common misconception that the Internet has purely military roots, tied to the need for a data transmission system so tenacious that it could easily withstand a nuclear strike from a potential enemy and keep functioning under any critical conditions. Incidentally, that same great and mighty Skynet, which seizes world domination on April 19, 2011 in the "Terminator" films, is precisely such a highly reliable military mesh network, built along the lines of Baran's model.

In fact, ARPANET was a purely research project: it connected research centers, not military facilities, and focused on delivering data between nodes efficiently and in reasonable time. Of course, Baran's work on network fault tolerance significantly influenced the routing methods used on the Internet today. That is why Paul Baran, along with Larry Roberts, Leonard Kleinrock and Joseph Licklider, is considered one of the founders of the Internet.


A short flash of glory: developing his ideas for distributed packet-switched networks, Paul Baran co-founded Metricom in 1985. Its purpose was to build a data transmission network without a clearly defined central switching node. The network was designed primarily for the needs of the energy industry, which at the time was trying to cut the cost of managing such extensive infrastructures as electric and gas grids.

Leasing telephone channels from large American providers cost a pretty penny, because computers exchanging data were constantly in touch and therefore constantly occupied the channel. This is where Baran's ideas came in handy: build a distributed network whose nodes handle routing on their own. To abandon leased wired channels entirely, it was decided to make the network wireless. As the protocol basis, Metricom chose the emerging radio Ethernet standard.

During development it became clear that such a network could be competitive in the ISP market. Investors came to the same conclusion, among them Microsoft co-founder Paul Allen. Today Ricochet would be called a "last mile" network, because its main task was connecting users wirelessly to the Internet or to a corporate network.

By 1994 all the necessary consumer-grade equipment had been produced, and Metricom officially entered the ISP market with the commercial Ricochet network. Ricochet's expansion started in the town of Cupertino, the very one that hosts Apple's headquarters and Metricom's own office. In just one year the distributed Ricochet network stretched across the northern coast of San Francisco Bay, and a couple of years later it enmeshed New York, Los Angeles, Atlanta, Minneapolis, Dallas, Detroit and Miami.



Any modern telecom operator could envy the Ricochet network's coverage in 1995.

The key components of the Ricochet network were the wireless modems that Ricochet subscribers received along with their contract. They connected to the serial port (later to USB) and operated at 900 MHz, receiving and transmitting data at 28.8 kilobits per second over a distance of one to five miles. They communicated with the nearest microcellular radio modem, called a Poletop Radio.



Poletop Radios were microcellular modems that could interoperate with many user modems and many devices of their own kind. They provided intelligent packet routing in the Ricochet network, forming several alternative transmission routes. Having passed a packet on, each node sent an ACK (acknowledgment) signal to the previous node in the route, confirming successful transmission; thus a delivery confirmation would ricochet back for every packet, which is how the entire network got its name. The nodes were called Poletop because they were most often mounted on streetlight poles, the most convenient spot, available in abundance in any city. This is why the Ricochet network most often grew along the streets.
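
A hedged sketch of this hop-by-hop confirmation scheme, simplified far beyond the real Ricochet protocol:

```python
# Hop-by-hop acknowledgment sketch, loosely modeled on the Poletop scheme.
def relay(route, packet, failed=frozenset()):
    """Forward the packet along the route; each hop ACKs the previous one."""
    for sender, receiver in zip(route, route[1:]):
        if receiver in failed:
            print(f"{sender}: no ACK from {receiver}, try another route")
            return False
        print(f"{receiver}: got {packet!r}, ACK -> {sender}")
    return True

route = ["modem", "poletop-1", "poletop-2", "WAP"]
relay(route, b"data")                        # every hop confirms delivery
relay(route, b"data", failed={"poletop-2"})  # missing ACK exposes the break
```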


All Poletop modems within a radius of ten to twenty miles communicated with a Wired Access Point (WAP), a dedicated server usually located in one of the municipal buildings. This server provided a high-speed wired connection to the nearest regional interface for access to IP networks. Operating at 2.4 GHz, the WAP sustained a high data exchange rate (up to 128 kilobits per second) with multiple Poletops. A little later, users' modems began to work at the same frequency as well.




Most often, a regional Ricochet network server was located in a municipal building (city hall). Each regional interface for access to IP networks (NIF - Network Interface Facility) had leased communication channels to: Internet providers partnering with Metricom; subscribers' corporate networks; and the control center (NOC - Network Operations Center) of the distributed network itself. The latter not only monitored the state of all other network components but also hosted the Ricochet nameserver, which handled the authorization of users connecting to the network.

Working with Ricochet was unlike any Internet access technology available at the time. The user, switching on a laptop and a wireless Ricochet modem (which had its own autonomous power supply), could access the network anywhere in the city. The modem communicated with the nearest Poletop, which, talking with neighboring Poletops, formed dynamic packet routes to the nearest WAP. From there, Ricochet packets were converted into IP packets and traveled over leased wired networks.

In the late nineties the Ricochet network had over forty thousand subscribers, despite the high cost of the modems (three hundred US dollars), a paid registration (thirty dollars) and a rather hefty monthly subscription fee (seventy-five dollars).

All these costs were more than compensated by the ability to get network access anywhere, both in your hometown and on business trips to other major cities. And Ricochet reached beyond the megacities: plenty of towns across single-story America had the opportunity to build a small Ricochet infrastructure of their own.



In the Ricochet network, user data packets moved literally from post to post.

In 1997 Paul Allen acquired a controlling stake in Metricom. Analysts predicted a brilliant future for a promising and, most importantly, genuinely working technology. Yet in 2001, with a subscriber base of more than fifty thousand people, Metricom declared bankruptcy.

The reason for the bankruptcy? It comes down to the misguided marketing policy chosen by Metricom's leadership. Ricochet's growth could not help but affect the position of traditional ISPs, which quickly adjusted their tariffs, making them genuinely affordable. Moreover, having grasped the prospects of wireless access, most of them began actively rolling out Wi-Fi. Cellular operators did not doze off either: in Ricochet they saw a worked example of organizing a wireless data network on existing infrastructure (lamp posts, municipal premises).

Metricom, meanwhile, sensed no trouble and did not even consider making its equipment and tariffs cheaper. Alas, the company got carried away inflating a soap bubble, the affliction of all dotcoms. Funds were poured into "promising" research on increasing network bandwidth; subscribers and shareholders were told about the launch of new high-speed lines and the release of new modems. The only thing Metricom forgot to report was that for the last couple of years before the bankruptcy it had been operating on credit, and that debt grew every day.

The bubble burst ten years ago. For a while the network continued to function, steadily losing subscribers. For several years its assets were bought up by various companies and organizations hoping to revive Metricom's former greatness at least within a few individual cities. In 2004 Terabeam tried to re-deploy the network in major cities, but the attempt got bogged down in bureaucratic correspondence with municipalities and endless negotiations with regional providers. All this happened against the background of the growing popularity of GPRS access and the active spread of public Wi-Fi hotspots.


On March 28, 2008, the Ricochet network officially ceased to exist.


Ricochet's future

Good ideas do not go down the drain, and at its core the Ricochet network had a great idea. Yes, today's average Internet consumer gets online not through a Ricochet modem but most often thanks to the 3G and Wi-Fi technology integrated into a smartphone, and the success of those technologies owes no small part to the "death" of the Ricochet network. But why death? Ricochet, like a certain well-known politician, lived, lives and, I think, will go on living for a long time.

Judge for yourself: a host of service data transmission networks operate successfully on the basis of Ricochet's developments, for example fire alarm systems and access control systems for protected facilities.


When network infrastructure must be deployed in places without traditional Internet access points (for example, during rescue operations in hard-to-reach areas or after man-made disasters), the ideas behind the Ricochet network become irreplaceable. There are even projects to deploy Ricochet-like networks on flying robotic drones.



And one more thing. Lately there has been more and more talk that the near future of wireless Internet access is a mesh infrastructure deployed across the multitude of user devices that flood any metropolis. So perhaps Ricochet technology will "ricochet" once more, from the past into the future.


Shot from the series "Silicon Valley"

The fourth season of the comedy series "Silicon Valley" ended on June 25. In the plot, the main character, Richard Hendricks, becomes captivated by the idea of creating a "new Internet" in which devices connect to each other directly rather than through a server. He is confident that this will help protect confidential data and secure the Internet against a possible "shutdown" by the state or large companies.

Existing problems

For decentralized services, one of the problems lies in the need to encrypt data. In the FireChat messenger, messages can be seen by all users, including intruders or law enforcement agencies. If developers add encryption, it can slow down the application and increase the load on the processors.

Hence the second problem: the spread of the decentralized Internet will impose an additional load on smartphone batteries, which still drain quite quickly. Some p2p cloud storage services therefore use encryption only on computers and laptops, not on smartphones. Such a service encrypts user data and uses free space on the device's disk or flash drive to store files.

In addition, the technology described by Hendricks is only possible if a sufficiently large number of users are on the network at the same time. Otherwise, it will have low bandwidth and poor connection quality. The protocol partly addresses this problem, because it allows users to open pages in a browser without contacting servers: anyone can serve sites and content to others, so a large number of participants is not required. This also makes it possible to cope with site blocking, which is impossible when there are no servers to block.
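
The serverless page-sharing idea rests on content addressing: a page is identified by the hash of its content, so any peer can serve it and the requester can verify what it receives. A minimal sketch, not tied to any particular protocol:

```python
# Content-addressing sketch: pages are fetched by content hash, so any
# peer can serve them and the result is verifiable without a server.
import hashlib

def address(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Each peer holds some pages keyed by their hash.
peer_store = {}
page = b"<html>decentralized page</html>"
peer_store[address(page)] = page

def fetch(addr: str, peers):
    for store in peers:
        if addr in store:
            content = store[addr]
            assert address(content) == addr  # verify, don't trust the peer
            return content
    return None

print(fetch(address(page), [peer_store]))
```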

Existing decentralized Internet services are marked, for example, by poor scalability or by the impossibility of establishing government or corporate control over them. The idea raised in the fourth season of "Silicon Valley" can be called realizable, but rather within the framework of individual applications and platforms: a project to rebuild the entire Internet is beyond one team of enthusiasts from California.

