Artificial intelligence (AI). Brief theoretical material

The Turing Test, proposed by Alan Turing, was developed as a satisfactory functional definition of intelligence. Turing decided that there was no point in drawing up an extensive, and possibly contradictory, list of requirements for artificial intelligence, and instead proposed a test based on the idea that the behavior of an object with artificial intelligence would ultimately be indistinguishable from the behavior of such undeniably intelligent entities as human beings. A computer passes this test if a human experimenter, after asking it written questions, cannot determine whether the written answers come from another person or from some device.

Writing a program that lets a computer pass this test requires a great deal of work. A computer programmed in this way must have the following capabilities:

  • Natural language processing (NLP) facilities, allowing it to communicate successfully, say in English.
  • Knowledge representation facilities, with the help of which the computer can store in memory what it learns or reads.
  • Automated reasoning facilities, providing the ability to use stored information to answer questions and to draw new conclusions.
  • Machine learning facilities, which allow it to adapt to new circumstances and to detect and extrapolate patterns of standard situations.

In the Turing test, direct physical interaction between the experimenter and the computer is deliberately excluded, since creating artificial intelligence does not require physically imitating a person. But the so-called total Turing test additionally uses a video signal so that the experimenter can check the perceptual abilities of the test subject, and also provides the opportunity to pass physical objects "through the hatch". To pass the total Turing test, a computer must also have the following abilities:

  • Machine vision facilities for perceiving objects.
  • Robotics facilities for manipulating objects and moving about.

The six research areas listed in this section make up the bulk of artificial intelligence, and Turing deserves our thanks for providing a test that remains relevant 50 years later. However, artificial intelligence researchers devote almost no effort to passing the Turing test, believing that it is far more important to study the underlying principles of intelligence than to duplicate one particular carrier of natural intelligence. The problem of "artificial flight", for example, was successfully solved only after the Wright brothers and other researchers stopped imitating birds and began studying aerodynamics. Scientific and technical works on aeronautics do not define the goal of their field as "creating machines that fly so exactly like pigeons that they can even deceive real birds."

There is hardly a person today who has not at least once heard of the Alan Turing test, yet most people have only a vague idea of what this testing procedure actually is. Let us therefore dwell on it in a little more detail.

What is the Turing Test: Basic Concept

Back in the late 1940s, many scientific minds were occupied with the problems of the first computers. It was then that a member of the Ratio Club, an informal group doing research in cybernetics, asked a perfectly logical question: is it possible to create a machine that would think like a human, or at least imitate human behavior?

Is it necessary to say who invented the Turing test? Apparently not. The initial basis of the entire concept, which is still relevant today, was the following principle: after some time spent communicating on arbitrary topics with an unseen interlocutor, can a person determine who is in front of him, a real person or a machine? In other words, the question is not only whether a machine can imitate the behavior of a real person, but also whether it can think for itself. This question remains controversial to this day.

History of creation

In general, if we regard the Turing test as a kind of empirical system for determining the "human" capabilities of a computer, it is worth noting that an indirect basis for its creation was a curious argument formulated by the philosopher Alfred Ayer back in 1936.

Ayer compared, so to speak, the life experiences of different people and on this basis expressed the opinion that a soulless machine would not be able to pass any such test, since it could not think. At best, it would be pure imitation.

In principle, this is how it is. Imitation alone is not enough to create a thinking machine. Many scientists cite the example of the Wright brothers, who built the first airplane, abandoning the tendency to imitate birds, which, by the way, was characteristic of such a genius as Leonardo da Vinci.

History is silent on whether Turing himself (1912-1954) knew of these postulates; in any case, in 1950 he compiled a whole system of questions that could determine the degree of "humanity" of a machine. It must be said that this development is still one of the fundamental ones, although today it is used mainly for testing, for example, computer bots. In practice it turned out that only a handful of programs have managed to pass the Turing test, and even "pass" is said with a great stretch, since the result has never reached 100 percent; at best it has been a little more than 50.

At the very beginning of his research, the scientist used his own invention, which came to be called the Turing machine. Since all conversations were to be entered exclusively in printed form, the scientist set several basic directives for writing responses, such as moving the printing tape to the left or right, printing a specific character, and so on.

Programs ELIZA and PARRY

Over time, the programs became more complex, and two of them showed results that were stunning for their time when subjected to the Turing test. These were ELIZA and PARRY.

As for ELIZA, created in the mid-1960s: based on the question, the machine had to find a key word and build a return answer around it, and this is what made it possible to deceive real people. If there was no such word, the machine returned a generalized answer or repeated one of its previous ones. However, whether ELIZA really passed the test is still in doubt, since the people who communicated with the program had been psychologically primed in advance to believe that they were talking to a person and not to a machine.
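The keyword-and-canned-response behaviour described above can be illustrated with a rough sketch. The keyword table and fallback phrases below are invented for the illustration and are far cruder than the script mechanism of the real ELIZA.

```python
# Toy ELIZA-style responder: find a keyword, return the canned reply tied
# to it; otherwise fall back to a generic phrase or repeat an earlier reply.
# The keyword table and phrases are invented purely for illustration.
import random

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you feel sad?",
    "work": "What about your work troubles you?",
}
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

history = []  # previous replies, reused when no keyword matches

def reply(sentence: str) -> str:
    for keyword, response in RULES.items():
        if keyword in sentence.lower():
            history.append(response)
            return response
    return random.choice(FALLBACKS + history)

print(reply("I had a fight with my mother"))
print(reply("It is raining today"))
```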

The PARRY program is somewhat similar to ELIZA, but it was created to simulate the communication of a paranoid person. What is most interesting is that real clinic patients were used to test it. After transcripts of the conversations conducted via teletype were recorded, they were assessed by professional psychiatrists. In only 48 percent of cases were they able to determine correctly which was the person and which was the machine.

In addition, almost all programs of that time operated with a certain time delay, since in those days a person thought much faster than a machine. Now it is the other way around.

Supercomputers Deep Blue and Watson

The developments of the IBM corporation looked quite interesting: they not only "thought" but also had incredible computing power.

Many people probably remember how in 1997 the supercomputer Deep Blue won a six-game chess match against the then reigning world champion Garry Kasparov. Strictly speaking, the Turing test is only very conditionally applicable to this machine: it was loaded in advance with a huge number of game templates covering an incredible number of possible continuations. The machine could evaluate about 200 million positions of pieces on the board per second!
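Deep Blue's search and evaluation ran on specialized hardware and were far more elaborate than anything that fits here, but the underlying game-tree idea can be sketched as a plain minimax recursion. All of the callback names below (evaluate, moves, apply_move) are assumptions introduced only for the illustration.

```python
# Minimal sketch of game-tree search (minimax), the general idea behind
# chess engines such as Deep Blue. The game-specific details are supplied
# by assumed callbacks: evaluate(position) -> static score,
# moves(position) -> legal moves, apply_move(position, move) -> new position.

def minimax(position, depth, maximizing, evaluate, moves, apply_move):
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)           # static evaluation at the horizon
    if maximizing:                          # our move: pick the best child
        return max(minimax(apply_move(position, m), depth - 1, False,
                           evaluate, moves, apply_move) for m in legal)
    return min(minimax(apply_move(position, m), depth - 1, True,
                       evaluate, moves, apply_move) for m in legal)
```

A real engine prunes this tree aggressively (alpha-beta and related techniques) instead of visiting every node.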

The Watson computer, consisting of 360 processors and 90 servers, won the American television quiz show, outperforming the other two participants in all respects, for which it received a $1 million prize. Again, whether this counts is debatable, because the machine was loaded with an enormous amount of encyclopedic data and simply analyzed each question for keywords, synonyms or general matches, and then gave the correct answer.
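Watson's actual DeepQA pipeline is vastly more sophisticated, but the keyword-and-synonym matching idea described above can be sketched roughly as follows; the knowledge base and synonym table are made up for the illustration.

```python
# Toy sketch of answering a question by matching keywords and synonyms
# against a small "encyclopedia". Watson's real pipeline was far more
# elaborate; the facts and synonyms below are invented for illustration.

KNOWLEDGE = {
    "alan turing": "Alan Turing proposed the Turing test in 1950.",
    "deep blue": "Deep Blue defeated Garry Kasparov in a 1997 chess match.",
}
SYNONYMS = {"computer": "machine", "beat": "defeated"}

def answer(question: str) -> str:
    words = question.lower().replace("?", "").split()
    words += [SYNONYMS[w] for w in words if w in SYNONYMS]
    best, best_score = "No answer found.", 0
    for topic, fact in KNOWLEDGE.items():
        score = sum(1 for w in words if w in topic or w in fact.lower())
        if score > best_score:
            best, best_score = fact, score
    return best

print(answer("Which machine beat Kasparov?"))
```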

Eugene Goostman emulator

One of the most interesting developments in this area was the Eugene Goostman program, created by a team including the Russian-born engineer Vladimir Veselov, now living in the United States, which imitated the personality of a 13-year-old boy from Odessa.

On June 7, 2014, the Eugene program demonstrated its capabilities in full. Interestingly, 5 bots and 30 human judges took part in the testing, and in 33% of cases the jury mistook the program for a human. The judges' task was complicated by the fact that a child has lower intelligence than an adult, and less knowledge.

The Turing test questions were quite general, but Eugene was also asked some specific questions about events in Odessa that could not have gone unnoticed by any resident. The answers nevertheless led the jury to think that the interlocutor was a child. For example, the program answered the question about its place of residence immediately. When asked whether it had been in the city on a particular date, the program stated that it did not want to talk about it. When the interlocutor tried to press the point about what exactly had happened that day, Eugene brushed it off, saying, in effect: you should know yourself, so why ask? In general, the child emulator turned out to be extremely convincing.

However, this is still an emulator, not a thinking creature. So the machine uprising will not happen for a very long time.

But on the other hand

Finally, it remains to add that so far there are no prerequisites for the creation of thinking machines in the near future. Nevertheless, whereas recognition used to be a problem for machines, now almost every one of us has to prove that we are not machines: just think of entering a CAPTCHA on the Internet to gain access to some action. So far it is believed that no electronic device has yet been created that can recognize distorted text or a distorted set of characters the way a person can. But who knows, everything is possible...

Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but it is not necessarily limited to biologically plausible methods.

What is artificial intelligence

Intelligence (from Latin intellectus: sensation, perception, understanding, comprehension, concept, reason), or mind, is a quality of the psyche consisting of the ability to adapt to new situations, to learn and remember on the basis of experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. Intelligence is the general ability to cognize and to solve difficulties, uniting all human cognitive abilities: sensation, perception, memory, representation, thinking and imagination.

In the early 1980s, the computer scientists Barr and Feigenbaum proposed the following definition of artificial intelligence (AI):


Later, a number of algorithms and software systems came to be classified as AI, their distinctive property being that they can solve certain problems the way a person pondering their solution would.

The main properties of AI are understanding language, learning and the ability to think and, importantly, act.

AI is a complex of related technologies and processes that are developing qualitatively and rapidly, for example:

  • natural language text processing
  • expert systems
  • virtual agents (chatbots and virtual assistants)
  • recommendation systems.

National strategy for the development of artificial intelligence

  • Main article: National strategy for the development of artificial intelligence

AI Research

  • Main article: Artificial Intelligence Research

Standardization in AI

2019: ISO/IEC experts supported the proposal to develop a standard in Russian

On April 16, 2019 it became known that the ISO/IEC subcommittee on standardization in the field of artificial intelligence had supported the proposal of the Technical Committee "Cyber-physical systems", created on the basis of RVC, to develop the standard "Artificial intelligence. Concepts and terminology" in Russian in addition to the basic English version.

The terminological standard "Artificial intelligence. Concepts and terminology" is fundamental for the entire family of international regulatory and technical documents in the field of artificial intelligence. In addition to terms and definitions, this document contains conceptual approaches and principles for constructing systems with AI elements, a description of the relationship between AI and other end-to-end technologies, as well as basic principles and framework approaches to the regulatory and technical regulation of artificial intelligence.

Following the meeting of the relevant ISO/IEC subcommittee in Dublin, ISO/IEC experts supported the proposal of the delegation from Russia to simultaneously develop a terminological standard in the field of AI not only in English, but also in Russian. The document is expected to be approved in early 2021.

The development of products and services based on artificial intelligence requires an unambiguous interpretation of the concepts used by all market participants. The terminology standard will unify the “language” in which developers, customers and the professional community communicate, classify such properties of AI-based products as “security”, “reproducibility”, “reliability” and “confidentiality”. A unified terminology will also become an important factor for the development of artificial intelligence technologies within the framework of the National Technology Initiative - AI algorithms are used by more than 80% of companies in the NTI perimeter. In addition, the ISO/IEC decision will strengthen the authority and expand the influence of Russian experts in the further development of international standards.

During the meeting, ISO/IEC experts also supported the development of a draft international document Information Technology - Artificial Intelligence (AI) - Overview of Computational Approaches for AI Systems, in which Russia acts as a co-editor. The document provides an overview of the current state of artificial intelligence systems, describing the main characteristics of the systems, algorithms and approaches, as well as examples of specialized applications in the field of AI. The development of this draft document will be carried out by a specially created working group 5 “Computational approaches and computational characteristics of AI systems” within the subcommittee (SC 42 Working Group 5 “Computational approaches and computational characteristics of AI systems”).

"As part of the work at the international level, the Russian delegation managed to achieve a number of landmark decisions that will have a long-term effect on the development of artificial intelligence technologies in the country. The development of a Russian-language version of the standard, starting from such an early phase, guarantees synchronization with the international field, while the development of the ISO/IEC subcommittee and the initiation of international documents with Russian co-editorship lay the foundation for further promoting the interests of Russian developers abroad," he commented.

Artificial intelligence technologies are widely in demand in a variety of sectors of the digital economy. Among the main factors hindering their full-scale practical use is the underdevelopment of the regulatory framework. At the same time, it is the well-developed regulatory and technical framework that ensures the specified quality of technology application and the corresponding economic effect.

In the area of artificial intelligence, TC "Cyber-physical systems", based on RVC, is developing a number of national standards, the approval of which is planned for the end of 2019 - beginning of 2020. In addition, work is underway together with market players to formulate a National Standardization Plan (NSP) for 2020 and beyond. TC "Cyber-physical systems" is open to proposals for the development of documents from interested organizations.

2018: Development of standards in the field of quantum communications, AI and smart city

On December 6, 2018, the Technical Committee "Cyber-Physical Systems" based on RVC, together with the Regional Engineering Center "SafeNet", began developing a set of standards for the markets of the National Technology Initiative (NTI) and the digital economy. By March 2019, it is planned to develop technical standardization documents in the fields of quantum communications, artificial intelligence and the smart city, RVC reported. Read more.

Impact of artificial intelligence

Risk to the development of human civilization

Impact on the economy and business

  • The impact of artificial intelligence technologies on the economy and business

Impact on the labor market

Artificial Intelligence Bias

At the heart of nearly everything that makes up the practice of AI (machine translation, speech recognition, natural language processing, computer vision, automated driving and much more) is deep learning. It is a subset of machine learning characterized by the use of neural network models, which can only loosely be said to mimic the workings of the brain, so classifying them as AI is something of a stretch. Any neural network model is trained on large data sets and thus acquires certain "skills," but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with images formally, without any understanding of what it is doing. Is such a system AI, and can systems built on machine learning be trusted? The implications of the answer to the last question extend beyond the scientific laboratory, which is why media attention to the phenomenon called AI bias has noticeably intensified. Read more.

Artificial Intelligence Technology Market

AI market in Russia

Global AI market

Areas of application of AI

The areas of application of AI are quite broad, covering both familiar technologies and emerging new areas that are still far from mass application; in other words, they span the entire range of solutions, from vacuum cleaners to space stations. Their diversity can be divided according to the criterion of key points of development.

AI is not a monolithic subject area. Moreover, some technological areas of AI appear as new sub-sectors of the economy and separate entities, while simultaneously serving most areas in the economy.

The development of the use of AI leads to the adaptation of technologies in classical sectors of the economy along the entire value chain and transforms them, leading to the algorithmization of almost all functionality, from logistics to company management.

Using AI for Defense and Military Affairs

Use in education

Using AI in business

AI in the fight against fraud

On July 11, 2019 it became known that in just two years artificial intelligence and machine learning will be used to combat fraud three times more often than in July 2019. Such data was obtained during a joint study by SAS and the Association of Certified Fraud Examiners (ACFE). As of July 2019, such anti-fraud tools are already used in 13% of organizations that took part in the survey, and another 25% said that they plan to implement them within the next year or two. Read more.

AI in the electric power industry

  • At the design level: improved forecasting of generation and demand for energy resources, assessment of the reliability of power generating equipment, automation of increased generation when demand surges.
  • At the production level: optimization of preventive maintenance of equipment, increasing generation efficiency, reducing losses, preventing theft of energy resources.
  • At the promotion level: optimization of pricing depending on the time of day and dynamic billing.
  • At the level of service provision: automatic selection of the most profitable supplier, detailed consumption statistics, automated customer service, optimization of energy consumption taking into account the customer’s habits and behavior.

AI in manufacturing

  • At the design level: increasing the efficiency of new product development, automated supplier assessment and analysis of spare parts requirements.
  • At the production level: improving the process of completing tasks, automating assembly lines, reducing the number of errors, reducing delivery times for raw materials.
  • At the promotion level: forecasting the volume of support and maintenance services, pricing management.
  • At the level of service provision: improving planning of vehicle fleet routes, demand for fleet resources, improving the quality of training of service engineers.

AI in banks

  • Pattern recognition - used, among other things, to recognize customers in branches and present them with specialized offers.

AI in transport

  • The auto industry is on the verge of a revolution: 5 challenges of the era of unmanned driving

AI in logistics

AI in brewing

AI in the judiciary

Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption schemes. This opinion was expressed in the summer of 2017 by Vladimir Krylov, Doctor of Technical Sciences, technical consultant at Artezio.

The scientist believes that existing solutions in the field of AI can be successfully applied in various spheres of the economy and public life. The expert points out that AI is successfully used in medicine, but in the future it can completely change the judicial system.

“Looking at news reports every day about developments in the field of AI, you are only amazed at the inexhaustible imagination and fruitfulness of researchers and developers in this field. Reports on scientific research are constantly interspersed with publications about new products bursting onto the market and reports of amazing results obtained through the use of AI in various fields. If we talk about expected events, accompanied by noticeable hype in the media, in which AI will again become the hero of the news, then I probably won’t risk making technological forecasts. I can imagine that the next event will be the emergence somewhere of an extremely competent court in the form of artificial intelligence, fair and incorruptible. This will happen, apparently, in 2020-2025. And the processes that will take place in this court will lead to unexpected reflections and the desire of many people to transfer to AI most of the processes of managing human society.”

The scientist regards the use of artificial intelligence in the judicial system as a "logical step" in the development of legislative equality and justice. Machine intelligence is not subject to corruption and emotions, can adhere strictly to the legislative framework, and can make decisions taking into account many factors, including data that characterize the parties to a dispute. By analogy with the medical field, robot judges could operate with big data from government service repositories. It can be assumed that...

Music

Painting

In 2015, the Google team tested neural networks to see whether they could create images on their own. The artificial intelligence was first trained on a large number of different pictures. However, when the machine was "asked" to depict something on its own, it turned out that it interpreted the world around us in a somewhat strange way. For example, when asked to draw dumbbells, the developers received an image in which the dumbbells were attached to human hands. This probably happened because, at the training stage, the analyzed pictures of dumbbells included hands, and the neural network misinterpreted this.

On February 26, 2016, at a special auction in San Francisco, Google representatives raised about $98 thousand from psychedelic paintings created by artificial intelligence. The funds were donated to charity. One of the machine's most successful pictures is presented below.

A painting painted by Google's artificial intelligence.

T.K. Katsaran, L.N. Stroeva. Turing Machine and Recursive Functions. Textbook for universities. Voronezh State University Publishing and Printing Center, 2008.

INTRODUCTION

The word "algorithm" comes from algorithmi, the Latin spelling of the name of the mathematician and astronomer Muhammad ibn Musa al-Khwarizmi (c. 783-850), who lived in the 8th-9th centuries. This greatest mathematician from Khorezm (a city in modern Uzbekistan) was known under that name in medieval Europe. In his book "On Indian Counting" he formulated the rules for writing natural numbers with Arabic numerals and the rules for operating on them. The concept of an algorithm later came to be used in a broader sense, and not only in mathematics.

The concept of an algorithm is important for mathematicians and practitioners alike. The execution of an existing algorithm is another matter: it can be entrusted to a subject or object that is not obliged to grasp the essence of the matter and perhaps is not even able to understand it. Such a subject or object is usually called a formal performer. An example of a formal performer is an automatic washing machine, which strictly carries out the actions prescribed to it even if you forgot to put in the powder. A person can also act as a formal performer, but first of all the formal performers are various automatic devices, including the computer. Each algorithm is created with a very specific performer in mind. The actions the performer is able to carry out are called its permissible actions, and the set of permissible actions forms the performer's command system. An algorithm must contain only actions that are permissible for the given performer.

For some problems, the search for a suitable algorithm went on for a long time. Examples of such problems are: (a) specify a method by which, for any predicate formula, one can determine in a finite number of operations whether it is identically true or not; (b) determine whether a Diophantine equation (an algebraic equation with integer coefficients) is solvable in integers. Since no algorithms for solving these problems could be found, the conjecture arose that such algorithms do not exist at all, and this was later proven: the first problem was settled by A. Church, and the second by Yu.V. Matiyasevich and G.V. Chudnovsky. It is impossible in principle to prove this using only the intuitive notion of an algorithm, so attempts were made to give a precise mathematical definition of the concept. In the mid-1930s S.C. Kleene, A.A. Markov, E. Post, A. Turing, A. Church and others proposed various mathematical definitions of the concept of an algorithm. It was subsequently proven that these different formal definitions are in a certain sense equivalent: they compute the same set of functions. This suggests that the main features of the intuitive concept of an algorithm are captured correctly in these definitions.

At each discrete moment of time, only one symbol (letter) from the external alphabet A = {Λ, a₁, a₂, ..., aₙ₋₁}, n ≥ 2, can be written into a cell of the Turing machine's tape. An empty cell is designated by the symbol Λ, and the symbol Λ itself is called blank, while the remaining symbols are called non-blank. In this alphabet A, the information supplied to the Turing machine is encoded as a word (a finite ordered sequence of symbols); the machine "processes" the information presented as a word into a new word. In a command written as qᵢaᵢ → aⱼDqⱼ, the expressions qᵢaᵢ and aⱼDqⱼ are called the left-hand and right-hand sides of the command, respectively. The number of commands whose left-hand sides are pairwise distinct is finite, since the sets Q \ {q₀} and A are finite.
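To make the command format qᵢaᵢ → aⱼDqⱼ concrete, here is a minimal sketch of a Turing machine simulator. The direction letters (R, L, S), the halting convention and the example machine (appending one symbol to a unary number) are assumptions made for the illustration and are not taken from the textbook excerpt.

```python
# Minimal Turing machine simulator sketch. The command table maps
# (state, symbol) -> (new_symbol, direction, new_state), mirroring the
# q_i a_i -> a_j D q_j command format described above. The example machine
# and the halting state q0 are invented for illustration.

BLANK = " "  # stands for the blank symbol Lambda

def run(tape, commands, state="q1", halt="q0", max_steps=1000):
    cells = dict(enumerate(tape))       # a sparse, effectively infinite tape
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, BLANK)
        new_symbol, direction, state = commands[(state, symbol)]
        cells[head] = new_symbol
        head += {"R": 1, "L": -1, "S": 0}[direction]
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example: scan right over a block of 1s, then append one more 1 and halt.
commands = {
    ("q1", "1"): ("1", "R", "q1"),
    ("q1", BLANK): ("1", "S", "q0"),
}
print(run("111", commands))  # prints 1111
```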

We see it all the time. “RESTful” this, “REST” protocol that, etc. However, a lot of us don’t really understand exactly what it means. We’ll fix exactly that gap in this article!

State

In computer science (and mathematics, to some extent), there is a concept of state. A certain system can be in state A, or it can be in state B, or it can be in any of a bunch of other states (usually, a finite list of states).

As an example, let us say that you write a program which turns the screen red if the temperature is more than 80 degrees Fahrenheit or turns it blue if the temperature is less than 80 degrees Fahrenheit.

We can call the first state (temp > 80 degrees) state “A” and the second state (temp < 80 degrees) state “B”. We’ll call the middling state (temp = 80 degrees) state “C”. As we can see, we have defined behaviours of the program at state A and state B, but not at state C.
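As a rough sketch, the three states of the example can be written down directly; the function name and colour strings are simply made up to match the example above.

```python
# Tiny sketch of the temperature example: states A, B and C correspond to
# temp > 80, temp < 80 and temp == 80 degrees Fahrenheit respectively.

def screen_color(temp_f: float) -> str:
    if temp_f > 80:
        return "red"        # state A: defined behaviour
    if temp_f < 80:
        return "blue"       # state B: defined behaviour
    return "undefined"      # state C: the example never specified a behaviour

print(screen_color(85), screen_color(70), screen_color(80))
```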

That is the general idea of state. Here’s the definition given by the all-knowing Wikipedia:

“In computer science and automata theory, the state of a digital logic circuit or computer program is a technical term for all the stored information, at a given point in time, which is used by the circuit or program.”

In short, it is a sort of “combination” of all the information that the program takes into account.

Now, we’re going to make a leap into something that’s considered largely theoretical and useless and then somehow connect to the very practical world of REST.

Turing machines

There was this man named Alan Turing, and he was quite a smart mathematician with an interest in the workings of computers. He conceived an imaginary computer (which, as we will see, is impossible to actually build) which he used to reason about things that happen in real computers.

The computer consists of a tape of infinite length, with a head through which the tape passes. The tape has “cells”, to which information can be written (numbers, colors, etc.) The tape moves through this machine, left or right, one cell at a time. The machine scans in whatever is already on the tape and, depending on what state it is in, it writes something back onto the tape then, subsequently, changes its state.

You may want to read that definition again for it to sink in completely. The essence of the idea is that the machine performs operations on memory based on its state, and its state changes according to operations on memory.

This seems like a fairly useless theoretical fantasy (although, there is absolutely nothing wrong with that, read A Mathematician’s Apology if you’re wondering why people care to take an interest in theoretical stuff). However, it is quite possibly the most fundamental concept of computer science.

Using the Turing Machine, computer scientists are able to reason about any algorithm that can be run on a computer. In fact, the Turing machine has led to many advances in computer science.

So, the Turing Machine shows us that today (note: I’m not considering graph reduction processors), literally everything in computer science is based on the idea of state.

The Network

Then came along the concept of the internet. This is a place where packets can go haywire and get resent until they reach their destination. In general, a complete transmission of the full data is never guaranteed.

Clients come in quickly and leave twice as fast. As such, holding onto client state doesn’t make much sense when working on a network system.

Also, in general, holding too much state in a system has been a cause of major pain (this is also part of what motivates functional programming languages, which do not allow the holding of state in a large portion of your code).

Now, people needed a network protocol for the internet which was simple, fast and handled management of state well in an extremely dynamic environment.

That protocol was HTTP, and the theory that grew out of the work on HTTP was labeled REST.

REST

REST stands for REpresentational State Transfer. It is a system built around the “client-server” concept that networks are built on top of (well, there are also peer-to-peer type networks, but client-server is arguably the simplest and most tested architecture). The name itself is slightly misleading, since the server is completely state-free!

There are a few constraints that all systems that claim to be “RESTful” must follow.

First of all, it must be a client-server system. This constraint has been modified in the past, but in the formal definition (and, for the theory of REST to work properly), we have servers to which clients can connect.

The server is fully stateless. This means that for each client request to the server, no state is reserved on the server. For example, if a client (using HTTP) requests the index page and subsequently requests the /user/home page, the two requests are completely independent of each other. The client holds all of the state of the system (hence, we have the back button).

Responses from the server must be cacheable, which means that there must be a system on the server which identifies responses as cacheable or not, so that clients (e.g. web browsers) can take advantage of the cache.

Finally, we must have a simple, clean, and uniform interface. If you have had some experience with how HTTP works, the request forms are very simple. They are largely verbs and nouns with a very easy to parse and human-readable format. SOAP is another form of a network protocol which absolutely obliterates this requirement and, therefore, is often very difficult to work with.

Now, what do all of these properties, when taken together, entail?

Implications of REST

They let us build systems that are cleanly separated, scalable, and easily debugged.

Let us consider the restriction of statelessness on the server first; it may well be the most important (and also the most often violated by so-called RESTful architectures) bit.

As previously mentioned, in a client-server architecture the clients are very nimble; they fire requests and suddenly kill all communication. In this kind of a situation, it is much cleaner to not save state about the client on the server. This lets us reason about HTTP servers very easily; each client request may be treated as if it is a completely new client, and it wouldn’t make a penny of a difference.

Secondly, cacheable responses mean that the clients are able to get data much faster, because often times, the data can be retrieved from the client’s memory. This immediately increases client performance and, in some systems, server scalability.

The uniform interface may just be the most important. Having worked with SOAP, I can tell you that HTTP’s simple format is a blessing when trying to debug server code and the same applies to other RESTful systems.

Designing a RESTful System

Let’s say we need to write a server that returns stock quotes, as well as retaining a list of stock quotes to monitor (and, the user can either add to or clear the list).

This is an excellent case for a RESTful system, because it is inherently independent of the identity of the client. Therefore, the server need not hold any state about the client.

We first start by defining how the protocol language will work (a bit like the GET and POST verbs of HTTP):

GETQUOTE ticker - gives the price for a particular stock
ADDTICKER ticker - adds the given stock to the list
GETLIST - gets a comma-separated list of stocks in the list

That’s a fairly simple protocol, and it doesn’t hold any state on the server. Now, as for caching, we may say that we update the prices every hour, so caches more than one hour old may be thrown away.

And that’s all there is to it! Of course, we still have to implement this, but the general idea of the system is quite simple and clean!
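As a rough sketch of what such an implementation might look like, here is a minimal HTTP version of the protocol using only Python’s standard library. The URL paths, the hard-coded quote table and the in-memory watch list are assumptions made for the illustration; the watch list is kept as a shared server-side resource rather than per-client session state, and the one-hour cache policy is expressed with a Cache-Control header.

```python
# Minimal sketch of the stock-quote protocol over HTTP. The quote table,
# URL paths and in-memory watch list are invented for illustration; no
# per-client session state is kept on the server.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

QUOTES = {"IBM": 142.5, "GOOG": 175.1}   # made-up prices
WATCH_LIST = []                          # shared resource, not client state

class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        ticker = parse_qs(url.query).get("ticker", [""])[0].upper()
        if url.path == "/getquote":
            body = str(QUOTES.get(ticker, "unknown"))
        elif url.path == "/addticker":
            WATCH_LIST.append(ticker)
            body = "added " + ticker
        elif url.path == "/getlist":
            body = ",".join(WATCH_LIST)
        else:
            self.send_error(404)
            return
        data = body.encode()
        self.send_response(200)
        if url.path == "/getquote":
            # prices refresh hourly, so quote responses may be cached
            self.send_header("Cache-Control", "max-age=3600")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StockHandler).serve_forever()
```

A request like GET /getquote?ticker=IBM then returns the price as plain text, and each request can be handled without remembering anything about the client that sent it.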

Conclusion

Hopefully, this article gave you a solid understanding of REST.

Also, I hope that you will now be able to call out people who throw around the term “RESTful” too much - I can tell you, there’s a lot of them!



 