Definition of video card

A video card processes data and converts it into information that monitors and televisions can represent.

A video card, also called a graphics card (among other names), is responsible for processing the data that comes from the main processor (the CPU, or central processing unit) and converting it into information that can be represented on devices such as monitors and televisions. It is worth mentioning that this component can have a wide variety of architectures, although they are commonly referred to by the same name, even when we are talking about a video chip integrated into a motherboard; in the latter case, it is more accurate to speak of a GPU (Graphics Processing Unit).

Since their inception, graphics cards have included various features and functions, such as the ability to tune in to television broadcasts or capture video streams from an external device. It is important to note that this is not a component found exclusively in current computers; graphics cards have existed for more than four decades and today they are also an indispensable part of video game consoles, both portable and home systems.

Origins of the video card

Their creation dates back to the late 1960s, when printers were left behind as the way to view a computer's activity and monitors came into use. At first, resolutions were tiny compared to the high definition now familiar to everyone. It was thanks to Motorola's research and development work that the chips' capabilities grew more complex, and its products gave rise to the standardization of the name "video card".

As personal computers and the first video game consoles became popular, manufacturers decided to integrate graphics chips into motherboards, since this considerably reduced manufacturing costs. At first glance, this presents a clear disadvantage: the impossibility of upgrading the hardware. However, these were closed systems, built with each and every one of their components taken into consideration, so that the final product was consistent and offered the highest possible performance.

It should be noted that this is still the case with consoles today, and it is thanks to this type of unalterable design that, after a few years, developers obtain results far superior to their first experiments. This is not possible on a PC, no matter how powerful, since a software company cannot account for every possible combination of its customers' machines. Furthermore, the architecture of a computer has weak points precisely because its parts are interchangeable, the most notable being the distance between the memory, the graphics card and the main processor.


The evolution of graphics cards made it possible to multiply the number of colors that are represented simultaneously.

The evolution

In the early 1980s, IBM built on the design of the unforgettable Apple II and popularized the interchangeable video card, although in its case it only offered the ability to display characters on the screen. It was an adapter with a modest 4 KB of memory (nowadays cards can carry 2 GB, more than 500,000 times as much) and was used with a monochrome monitor. This was the starting point, and improvements were not long in coming.
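As a quick worked check of that comparison, using only the 4 KB and 2 GB figures quoted above, the actual factor comes out to roughly half a million:

$$\frac{2\,\text{GB}}{4\,\text{KB}} = \frac{2 \times 1024 \times 1024\,\text{KB}}{4\,\text{KB}} = 524{,}288$$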

Some time later, IBM standardized the term VGA, which refers to a video card technology capable of offering a resolution of 640 pixels wide by 480 high, as well as to the monitors that could display these images and the connector required for their use. After the work of several companies dedicated exclusively to graphics, Super VGA (also known as SVGA) saw the light of day, increasing the available resolution (to 1024 x 768) as well as the number of colors that could be represented simultaneously (from 16 colors at 640 x 480 to 256 at 1024 x 768).
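To illustrate why that jump also demanded more video memory, here is a minimal sketch, not from the original article, that estimates the memory needed to hold a single frame in each of the two modes mentioned above. The function name and the assumption of a simple packed-pixel framebuffer with a power-of-two palette are our own illustration:

```python
def framebuffer_bytes(width: int, height: int, colors: int) -> int:
    """Minimum bytes needed to store one full frame at the given
    resolution, assuming a packed-pixel framebuffer and a
    power-of-two palette size (illustrative assumption)."""
    bits_per_pixel = (colors - 1).bit_length()   # 16 colors -> 4 bpp, 256 -> 8 bpp
    return width * height * bits_per_pixel // 8

# The two modes mentioned above
vga = framebuffer_bytes(640, 480, 16)      # 153,600 bytes (~150 KB)
svga = framebuffer_bytes(1024, 768, 256)   # 786,432 bytes (768 KB)
print(f"VGA  640x480,  16 colors:  {vga:,} bytes")
print(f"SVGA 1024x768, 256 colors: {svga:,} bytes")
```

Under these assumptions, a single 640 x 480 frame with 16 colors occupies about 150 KB, while a 1024 x 768 frame with 256 colors needs 768 KB, which helps explain the steady growth in on-board video memory.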