Buffering Techniques
The two common arrangements for buffering data are the pooled buffer and the circular buffer. In the pooled buffer, multiple buffers are allocated, each equal in size to one data record. As each data record is received, it is copied into a free buffer from the pool. When it is time to remove data for processing, the buffers are read in the order in which they were filled (first in, first out, or FIFO). As a buffer is read, it is marked as free so that it can be reused for incoming data.

In the circular buffer there is only a single buffer, large enough to hold a number of data records. The buffer is set up as a queue (see queue) to which incoming data records are written and from which they are read as needed for processing. Because the queue is circular, there is no fixed “first” or “last” position. Instead, two pointers (called In and Out) are maintained. As data is stored in the buffer, the In pointer is advanced; as data is read back from the buffer, the Out pointer is advanced. When either pointer reaches the end of the buffer, it wraps around to the beginning. The software managing the buffer must ensure that the In pointer never overtakes the Out pointer (which would overwrite records that have not yet been read) and that the Out pointer never overtakes the In pointer (which would read slots that do not yet hold valid data). A code sketch of a circular buffer appears at the end of this entry.

The fact that programmers sometimes fail to check for buffer overflows has resulted in a seemingly endless series of security vulnerabilities, such as those in earlier versions of the UNIX sendmail program. In a typical attack, an overly long input value is used to write data, or worse, executable commands, into the memory areas that control the program’s execution, possibly allowing the attacker to take over the program (see also computer crime and security). A second sketch at the end of this entry illustrates the unchecked copy at the root of such overflows.

Buffering is conceptually related to a variety of other techniques for managing data. A disk cache is essentially a special buffer that stores additional data read from a disk in anticipation that the consuming program will soon request it. A processor cache stores instructions and data in anticipation of the needs of the CPU. Streaming of multimedia (video or sound) buffers a portion of the content so that it can be played smoothly while additional content is being received from the source.

Depending on the application, the buffer can be part of the system’s main memory (RAM), or it can be a separate memory chip or chips on board the printer or other device. Decreasing prices for RAM have led to increases in the typical size of buffers. Moving data from main memory to a peripheral buffer also supports the multitasking found in most modern operating systems, by allowing applications to buffer their output and continue processing.
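The circular buffer described above can be captured in a few lines of code. The following is a minimal sketch in C; the record type (int), the capacity, and the structure and function names are illustrative assumptions rather than part of any particular system. A count field makes the full and empty checks explicit, which is one common way of keeping In from overtaking Out and Out from overtaking In.

/* Minimal circular (ring) buffer sketch. Names and sizes are illustrative. */
#include <stdbool.h>
#include <stddef.h>

#define CAPACITY 8                /* number of records the buffer can hold */

typedef struct {
    int    data[CAPACITY];        /* one slot per data record */
    size_t in;                    /* next slot to write (the In pointer)  */
    size_t out;                   /* next slot to read  (the Out pointer) */
    size_t count;                 /* records currently stored */
} ring_buffer;

/* Store a record; fails if the buffer is full (In would overtake Out). */
bool rb_put(ring_buffer *rb, int value)
{
    if (rb->count == CAPACITY)            /* full: would overwrite unread data */
        return false;
    rb->data[rb->in] = value;
    rb->in = (rb->in + 1) % CAPACITY;     /* wrap around at the end */
    rb->count++;
    return true;
}

/* Read the oldest record; fails if the buffer is empty (Out would overtake In). */
bool rb_get(ring_buffer *rb, int *value)
{
    if (rb->count == 0)                   /* empty: no valid data to read */
        return false;
    *value = rb->data[rb->out];
    rb->out = (rb->out + 1) % CAPACITY;   /* wrap around at the end */
    rb->count--;
    return true;
}

A buffer declared as ring_buffer rb = {0}; starts out empty, with both pointers at slot 0. An alternative design omits the count field and instead leaves one slot unused, treating In == Out as empty and (In + 1) % CAPACITY == Out as full.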
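The overflow scenario described above usually begins with an unchecked copy into a fixed-size buffer. The sketch below, again in C with an assumed 16-byte buffer and hypothetical function names, contrasts the unsafe pattern with a bounds-checked version.

#include <stdio.h>
#include <string.h>

void handle_request_unsafe(const char *input)
{
    char name[16];
    strcpy(name, input);          /* no length check: input longer than 15
                                     characters overwrites adjacent memory */
    printf("hello, %s\n", name);
}

void handle_request_safe(const char *input)
{
    char name[16];
    strncpy(name, input, sizeof name - 1);  /* copy at most 15 characters */
    name[sizeof name - 1] = '\0';           /* always terminate the string */
    printf("hello, %s\n", name);
}

In the unsafe version, an overly long input overwrites whatever the compiler placed next to the buffer on the stack, which on many systems includes the function’s saved return address; carefully crafted input can therefore redirect the program’s execution.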