Tech Terms

This glossary provides technical definitions of computing and ICT terms.

3D Printer

A 3D printer is a computer-aided manufacturing (CAM) device that creates three-dimensional objects. Like a traditional printer, a 3D printer receives digital data from a computer as input. However, instead of printing the output on paper, a 3D printer builds a three-dimensional model out of a custom material.

3D printers use a process called additive manufacturing to form (or "print") physical objects layer by layer until the model is complete. This is different from subtractive manufacturing, in which a machine reshapes or removes material from an existing block of material. Since 3D printers create models from scratch, they are more efficient and produce less waste than subtractive manufacturing devices.

The process of printing a 3D model varies depending on the material used to create the object. For example, when building a plastic model, a 3D printer may heat and fuse the layers of plastic together using a process called fused deposition modeling (FDM). When creating a metallic object, a 3D printer may use a process called direct metal laser sintering (DMLS). This method forms thin layers of metal from metallic powder using a high-powered laser.

While 3D printing has been possible since the 1980s, it was primarily used for large-scale industrial purposes. However, in recent years, 3D printers have become much cheaper and are now available to the consumer market. As the technology becomes more widespread, 3D printers may become a viable means for people to create their own home products and replacement parts.

Updated January 2, 2014 by Per C.

3G

3G is a collection of third-generation cellular data technologies. The first generation (1G) was introduced in 1982, while the second generation of cellular data technologies (2G) became standardized in the early 1990s. 3G technologies were introduced as early as 2001 but did not gain widespread use until 2007.

In order to be labeled "3G," a cellular data transfer standard must meet a set of specifications defined by the International Telecommunication Union, known as IMT-2000. For example, all 3G standards must provide a peak data transfer rate of at least 2 Mbps. Most 3G standards, however, provide much faster transfer rates of up to 14.4 Mbps.

While many cell phone companies market phones with "3G technology," there is no single 3G standard. Rather, different companies use their own technologies to achieve similar data transfer rates. For example, AT&T uses a 3G technology based on GSM, while Verizon uses a technology based on CDMA. Additionally, cell phone networks outside the United States use different IMT-2000 compliant standards to achieve 3G data transfer speeds.

3G precedes 4G, the fourth generation of cellular data technologies.

Updated June 7, 2012 by Per C.

403 Error

A 403 error is an HTTP error code that indicates access to a specific URL is forbidden. Websites often display 403 errors with a generic message such as, "You don't have permission to access this resource."

There are several reasons a web server may produce a 403 forbidden error. Some of the most common include:

A missing index page
An empty directory
Invalid folder permissions
Invalid file permissions
Invalid file ownership

When you access a website directory on an Apache web server (a URL ending with a forward slash "/"), the standard behavior is to display the contents of the index page (index.php, index.asp, etc.). If no index page is available, the fallback option is to list the contents of the directory. However, for security purposes, many web servers are configured to disallow directory listings. Therefore, if you access an empty directory or a folder without an index page, you may receive a 403 error because the directory listing is forbidden.

Invalid file and folder permissions can also produce 403 errors. Generally, files and folders on web servers must have the following permissions enabled (a sketch after this entry shows one way these might be applied):

Files
Owner: read, write
Group: read
Everyone: read

Folders
Owner: read, write, execute
Group: read, execute
Everyone: read, execute

If a specific file or the parent folder has incorrect permissions, the web server may be unable to access it, producing a 403 forbidden error. Similarly, if a file's "owner" does not match the corresponding website user account, it may generate a 403 error. Ownership discrepancies can occur when files are transferred between accounts on a web server or when a file is uploaded by another user.

In some cases, access to a specific URL is intentionally forbidden for security reasons. In other cases, a forbidden error may be caused by an accidental website misconfiguration. If you unexpectedly receive a 403 error in your web browser, you can contact the webmaster of the corresponding website and provide the URL that produced the error.

Updated June 19, 2021 by Per C.
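As a rough illustration of the permission settings listed above, the following minimal Python sketch applies the typical web-server permissions (owner read/write, everyone else read-only on files; the same plus execute on folders) to a hypothetical site directory. The path is a placeholder, not part of the original definition.

```python
import os

SITE_ROOT = "/var/www/example"  # hypothetical document root

for dirpath, dirnames, filenames in os.walk(SITE_ROOT):
    # Folders: owner read/write/execute, group and everyone read/execute (755)
    os.chmod(dirpath, 0o755)
    for name in filenames:
        # Files: owner read/write, group and everyone read (644)
        os.chmod(os.path.join(dirpath, name), 0o644)
```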

404 Error

A 404 error is a common website error message that indicates a webpage cannot be found. It may be produced when a user clicks an outdated (or "broken") link or when a URL is typed incorrectly in a Web browser's address field.

Some websites display custom 404 error pages, which may look similar to other pages on the site. Other websites simply display the Web server's default error message text, which typically begins with "Not Found." Regardless of the appearance, a 404 error means the server is up and running, but the webpage or path to the webpage is not valid.

So why call it a "404 error" instead of simply a "Missing Webpage Error"? The reason is that 404 is an error code produced by the Web server when it cannot find a webpage. This error code is recognized by search engines, which helps prevent search engine crawlers from indexing bad URLs. 404 errors can also be read by Web scripts and website monitoring tools, which can help webmasters locate and fix broken links.

Other common Web server codes are 200, which means a webpage has been found, and 301, which indicates a file has moved to a new location. Like 404 errors, these status messages are not seen directly by users, but they are used by search engines and website monitoring software.

Updated October 22, 2010 by Per C.
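A quick way to see these status codes in practice is to request a URL and inspect the code the server returns. This is a minimal sketch using Python's standard urllib module; the URL is only a placeholder.

```python
from urllib.request import urlopen
from urllib.error import HTTPError

url = "https://example.com/missing-page"  # placeholder URL

try:
    response = urlopen(url)
    print(response.status)   # 200 means the page was found
except HTTPError as err:
    print(err.code)          # 404 means the page could not be found
```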

4G

4G is a collection of fourth-generation cellular data technologies. It succeeds 3G and is also called "IMT-Advanced," or "International Mobile Telecommunications Advanced." 4G was made available as early as 2005 in South Korea under the name WiMAX and was rolled out in several European countries over the next few years. It became available in the United States in 2009, with Sprint being the first carrier to offer a 4G cellular network.

All 4G standards must conform to a set of specifications created by the International Telecommunication Union. For example, all 4G technologies are required to provide peak data transfer rates of at least 100 Mbps. While actual download and upload speeds may vary based on signal strength and wireless interference, 4G data transfer rates can actually surpass those of cable modem and DSL connections.

Like 3G, there is no single 4G standard. Instead, different cellular providers use different technologies that conform to the 4G requirements. For example, WiMAX is a popular 4G technology used in Asia and Eastern Europe, while LTE (Long Term Evolution) is more popular in Scandinavia and the United States.

4K

4K is a display standard that includes televisions, monitors, and other video equipment supporting a horizontal resolution of roughly 4,000 pixels. The most common 4K standard is Ultra HD (or UHD), which has a resolution of 3840 x 2160 pixels (3,840 pixels wide by 2,160 pixels tall). It doubles the resolution of HDTV (1920 x 1080) in each dimension, for four times the total number of pixels, and has an identical 16:9 aspect ratio.

Mass production of 4K televisions began in 2013. Sony, Panasonic, Samsung, LG, Sharp, and other manufacturers now offer 4K televisions alongside their HDTV lineups. Several companies released high-resolution video capture devices prior to 2013 so that 4K video content would be available for the new TVs. For example, Canon, JVC, and other companies released 4K digital video cameras in 2012. RED released the RED ONE in 2007, which paved the way for the 4K devices that followed in the next few years.

While 4K is often used in reference to television, it can also refer to high-resolution computer monitors. For example, several hardware manufacturers now offer 4K displays, which may also be called Hi-DPI monitors or retina displays. The most popular Hi-DPI monitor resolution is 3840 x 2160, though some displays have a wider aspect ratio and a resolution of 4096 x 2160.

NOTE: 4K televisions may also be called UHDTVs (as opposed to HDTVs). However, the phrase "4K television" is more commonly used in marketing.

Updated October 9, 2013 by Per C.
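The pixel figures quoted above are easy to verify with a short calculation. This Python snippet, included purely for illustration, confirms that UHD doubles each dimension of 1080p HDTV (four times the pixels) while keeping the 16:9 aspect ratio.

```python
from math import gcd

uhd = (3840, 2160)
hdtv = (1920, 1080)

print(uhd[0] / hdtv[0], uhd[1] / hdtv[1])      # 2.0 2.0 -- twice each dimension
print(uhd[0] * uhd[1] / (hdtv[0] * hdtv[1]))   # 4.0 -- four times the total pixels

d = gcd(*uhd)
print(f"{uhd[0] // d}:{uhd[1] // d}")          # 16:9 aspect ratio
```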

5G

5G is the fifth generation of cellular data technology. It succeeds 4G and related technologies, including LTE. The first 5G cellular networks were constructed in 2018, while 5G devices became widespread in 2019 and 2020.

5G vs 4G

Benefits of 5G include faster speeds, lower latency, and greater capacity. The theoretical maximum data transfer rate of 5G is 20 Gbps (2.5 gigabytes per second). That is 20x faster than LTE-Advanced, which has a peak download speed of 1,000 Mbps. 5G latency (the time to establish a connection) is estimated to be 10 to 20 milliseconds, compared to 4G's average latency of 40 ms. The maximum traffic capacity of 5G is roughly 100x greater than that of a typical 4G network.

4G smartphones and other devices are not compatible with 5G transmitters. Therefore, most 4G towers will remain operational for several years after the rollout of 5G networks.

5G Multiband

Compared to previous cellular technologies, 5G uses a wider range of frequency bands. Instead of broadcasting all signals at a low frequency, 5G supports multiple frequencies that can be optimized for different areas. For example, low frequencies travel further but do not provide high data transfer rates. These are ideal for rural areas with a long distance between cellular towers. High frequencies have limited range and are highly impacted by physical barriers, but they support faster speeds. These are ideal for densely populated areas with large numbers of cell towers.

5G frequency bands are separated into three categories:

Low-band - broadcasts at low frequencies between 600 and 1,000 MHz, providing download speeds in the range of 30 to 250 Mbps. Both the frequency and range are similar to a 4G signal.
Mid-band - broadcasts between 1 and 6 GHz and provides speeds of 100 to 900 Mbps. Each cell tower covers a radius of several miles. Mid-band is expected to be the most widely deployed.
High-band - broadcasts at ultra-high frequencies between 25 and 40 GHz. The short "millimeter waves" provide data transfer speeds above one gigabit per second. High-band 5G has a limited range of about one mile. It is ideal for city centers, sports stadiums, and other public gathering areas.

NOTE: While the maximum data transfer rates of 5G are much higher than those of 4G, actual speeds depend on the quality of the signal. The speed of a high-band 5G signal, for instance, may drop by 50% if you step behind a building or walk a block away from the closest cell tower. With mid- and high-band 5G, it is especially important to have a strong signal.

Updated May 18, 2020 by Per C.

802.11a

802.11a is an IEEE standard for transmitting data over a wireless network. It uses a 5 GHz frequency band and supports data transfer rates of 54 Mbps, or 6.75 megabytes per second.

The 802.11a standard was released in 1999, around the same time as 802.11b. While 802.11b only supported a data transfer rate of 11 Mbps, most routers and wireless cards at that time were manufactured using the 802.11b standard. Therefore, 802.11b remained more popular than 802.11a for several years. In 2003, 802.11g replaced both of the previous standards. 802.11g supports transfer rates of up to 54 Mbps (like 802.11a) but uses the same 2.4 GHz band as 802.11b.

NOTE: In order for an 802.11a connection to work, each device on the wireless network must support the 802.11a standard. For example, if a base station broadcasts an 802.11a signal, only computers with Wi-Fi cards that support 802.11a will be able to communicate with it. While many routers are backward-compatible with older standards, some must be manually configured to work with older 802.11a and 802.11b devices.

Updated December 12, 2020 by Per C.
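Several entries in this glossary quote transfer rates in both megabits and megabytes per second (for example, 54 Mbps ≈ 6.75 MB/s above). The conversion is simply a division by eight; here is a small Python helper for illustration.

```python
def mbps_to_megabytes_per_second(mbps):
    """Convert a data rate in megabits per second to megabytes per second."""
    return mbps / 8

print(mbps_to_megabytes_per_second(54))   # 6.75  (802.11a / 802.11g)
print(mbps_to_megabytes_per_second(11))   # 1.375 (802.11b)
```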

802.11ac

802.11ac (also called 5G Wi-Fi) is the fifth generation of Wi-Fi technology, standardized by the IEEE. It is an evolution of the previous standard, 802.11n, that provides greater bandwidth and more simultaneous spatial streams. This allows 802.11ac devices to support data transfer rates that are several times faster than those of 802.11n devices.

Unlike previous Wi-Fi standards, which operated at a 2.4 GHz frequency, 802.11ac operates exclusively on a 5 GHz frequency band. This prevents interference with common 2.4 GHz devices, such as cordless phones, baby monitors, and older wireless routers. Computers and mobile devices that support 802.11ac will benefit from the 5 GHz bandwidth, but older wireless devices can still communicate with an 802.11ac router at a slower speed.

The initial draft of the 802.11ac standard was approved in 2012, but 802.11ac hardware was not released until 2013. The initial 802.11ac standard (wave 1) supports a maximum data transfer rate of 1300 Mbps, or 1.3 Gbps, using 3 spatial streams. This is significantly faster than 802.11n's maximum speed of 450 Mbps. It also means 802.11ac is the first Wi-Fi standard that has the potential to be faster than Gigabit Ethernet. The second 802.11ac standard (wave 2) will support twice the bandwidth of wave 1 devices and offer data transfer rates of up to 3470 Mbps.

Updated July 24, 2013 by Per C.

802.11b

802.11b is one of several Wi-Fi standards developed by the Institute of Electrical and Electronics Engineers (IEEE). It was released in 1999 along with 802.11a as the first update to the initial 802.11 specification, published in 1997. Both 802.11a and 802.11b are wireless transmission standards for local area networks, but 802.11a uses a 5 GHz frequency, while 802.11b operates on a 2.4 GHz band.

The 802.11b Wi-Fi standard provides a wireless range of roughly 35 meters indoors and 140 meters outdoors. It supports transfer rates up to 11 Mbps, or 1.375 megabytes per second. In the late 1990s, this was significantly faster than Internet speeds available to most homes and businesses. Therefore, the speed was typically only a limitation for internal data transfers within a network. While 802.11b provided similar data transfer rates as 10Base-T Ethernet, it was slower than newer wired LAN standards, such as 100Base-T and Gigabit Ethernet.

In 2003, the IEEE published the 802.11g standard, which provides wireless transfer rates of up to 54 Mbps. 802.11g consolidated the previous 802.11 "a" and "b" specifications into a single standard that was backward-compatible with 802.11b devices. Most Wi-Fi devices used 802.11g throughout the 2000s until the 802.11n standard was published in 2009.

Updated March 20, 2019 by Per C.

802.11g

802.11g is one of several Wi-Fi standards developed by the Institute of Electrical and Electronics Engineers (IEEE) for transmitting data over a wireless local area network. It launched in 2003 as a replacement for the previous generations, 802.11a and 802.11b. It operates on the 2.4 GHz microwave radio band and supports data transfer speeds up to 54 Mbps. It is now also known as Wi-Fi 3.

An 802.11g Wi-Fi network provides roughly the same wireless range as a previous-generation 802.11b network, approximately 38 meters or 125 feet indoors. The standard also maintains backward compatibility with 802.11b, allowing an 802.11b device to connect to a network created by an 802.11g router. However, any 802.11b devices on a network will limit every other Wi-Fi device to 11 Mbps (the maximum speed of an 802.11b connection).

The 802.11g standard gained popularity along with the rise of home broadband Internet. It offered data transfer rates that, while only half the speed of 100 Mbps Ethernet, were fast enough to keep up with the DSL and cable Internet connections of the time. The next generation of Wi-Fi, 802.11n, arrived in 2009 and offered better speeds and less interference by operating across both the 2.4 GHz and 5 GHz bands.

Updated October 18, 2022 by Brian P.

802.11n

802.11n (retroactively named Wi-Fi 4) is a Wi-Fi standard developed by the IEEE and officially published in 2009. It features a longer range and faster data transfer rates when compared to the previous standard, 802.11g. It is the first Wi-Fi standard to support both the 2.4 GHz and 5 GHz frequency bands and is capable of using both at the same time while operating in dual-band mode.

802.11n is the first generation of Wi-Fi to support MIMO (multiple input, multiple output) data transfers. MIMO increases both a network's range and its total throughput by allowing a router to use up to 4 antennas at once. This spreads a single Wi-Fi signal across several channels, utilizing more of the available wireless radio spectrum. Under ideal conditions, an 802.11n router or access point can transmit data at up to 600 Mbps, far faster than 802.11g's maximum of 54 Mbps. However, since the 5 GHz band is more susceptible to interference from walls and other physical objects, real-world speeds are often between 100 and 200 Mbps.

A Netgear 802.11n router featuring multiple antennas

802.11n uses the same WPA and WPA2 security methods as 802.11g. It is backward compatible with devices from previous generations of Wi-Fi, allowing you to use older 802.11g devices on an 802.11n network. When running in dual-band mode, a router can communicate with 802.11g devices using the slower 2.4 GHz band while reserving the faster 5 GHz band for newer 802.11n devices.

Updated November 7, 2023 by Brian P.

Abend

Short for "Abnormal End." An abend (sometimes capitalized ABEND) is an error that causes an unexpected or abnormal end of a process. It is typically the result of a flaw in a programming instruction that the computer cannot understand. The effect of an abend can be the affected application unexpectedly quitting or becoming unresponsive, sometimes requiring a reboot to resolve. An abend error indicates that the crash stems from a problem in software rather than a hardware fault. Software programs and operating systems can both be susceptible to abend errors.

The term "abend" comes from the IBM OS/360 operating systems of the 1960s, where it was a common error code. Some causes of abend errors were dividing by zero, running out of storage space, or pointing to a memory address that the computer cannot access. The error code is also present in Novell NetWare systems. Over time, it has become a general programming term for a software-related crash.

Updated October 25, 2022 by Brian P.
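The listed causes, such as dividing by zero or addressing memory that is unavailable, still end programs abnormally today. The sketch below is a Python illustration only (not OS/360 code): the error terminates the process unless the program explicitly handles it.

```python
def report(total, count):
    return total / count  # a count of 0 raises ZeroDivisionError

try:
    report(100, 0)
except ZeroDivisionError:
    # Without this handler, the interpreter would end the program
    # abnormally and print a traceback instead of this message.
    print("recovered from an abnormal condition")
```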

Abstraction

An abstraction is a general concept or idea, rather than something concrete or tangible. In computer science, abstraction has a similar definition. It is a simplified version of something technical, such as a function or an object in a program. The goal of "abstracting" data is to reduce complexity by removing unnecessary information.

At some level, we all think of computers in abstract terms. When we type a document in a word processor, we don't think of the CPU processing each letter we type and the data being saved to memory. When we view a webpage, we don't think of the binary data being transferred over the Internet and being processed and rendered by the web browser. We simply type our documents and browse the web. This is how we naturally abstract computing concepts.

Even highly technical people, such as software developers, can benefit from abstraction. For instance, one of the key benefits of object-oriented programming is data abstraction. It transforms complex entities into simplified objects, which can be accessed and altered within a program. These objects, which are often called classes, may have multiple attributes and methods. By consolidating these items into a single object, it becomes easier for programmers to access and manage data within a program.

Updated April 19, 2019 by Per C.
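To make the object-oriented example concrete, here is a minimal Python sketch (the class and attribute names are hypothetical) in which a class hides low-level storage details behind a simple interface, so callers never touch the internals directly.

```python
class Document:
    """Abstracts away how text is stored and saved."""

    def __init__(self, title):
        self.title = title
        self._characters = []   # internal detail hidden from callers

    def type_text(self, text):
        self._characters.extend(text)

    def save(self, path):
        # The caller never deals with encodings or file handles directly.
        with open(path, "w", encoding="utf-8") as f:
            f.write("".join(self._characters))

doc = Document("notes")
doc.type_text("Hello, world")
doc.save("notes.txt")
```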

Accelerometer

An accelerometer is a sensor that measures vibration or acceleration forces on an object. An accelerometer can detect both static forces, like gravity, and dynamic forces, like vibrations and changes in acceleration and direction. Small accelerometers are integrated into smartphones, fitness trackers, game controllers, and other electronic devices to help them detect movement and orientation.

The most common type of accelerometer used in electronic devices is a capacitive accelerometer. This type of sensor includes a small mass connected to springs (or other flexible supports) and suspended between fixed capacitive plates. As the device moves, this small mass moves on its springs. This movement changes the distance between the fixed plates and the mass, resulting in a measurable change in electrical energy. The accelerometer converts this change into motion data, including the direction and magnitude of acceleration.

An iPhone can use its accelerometer to function as a level

Many different kinds of electronic devices integrate accelerometers for motion tracking. For example, smartphones use them to know when to rotate the screen and for other forms of motion control. Cars and trucks include accelerometers to detect the rapid deceleration caused by a crash so they can deploy airbags and other safety features. Airplanes use accelerometers paired with gyroscopes to measure and maintain orientation and help with navigation. Smartwatches and fitness trackers use them to count steps taken and distance traveled during workouts. Finally, game controllers can use accelerometers for motion control in gaming, first popularized with the Nintendo Wii.

NOTE: Accelerometers are one of many kinds of microelectromechanical systems (MEMS) found in a smartphone. Other MEMS sensors include gyroscopes, proximity sensors, barometers, compasses, and ambient light sensors.

Updated June 14, 2023 by Brian P.
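As a rough illustration of how a static gravity reading can be turned into orientation (for example, the level feature mentioned above), the following Python sketch computes pitch and roll angles from a hypothetical three-axis accelerometer sample; the input values are made up for the example.

```python
import math

# Hypothetical accelerometer reading in g-forces (x, y, z)
ax, ay, az = 0.10, -0.05, 0.99

# Standard tilt estimates from a static gravity vector
pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
roll = math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))

print(f"pitch: {pitch:.1f} degrees, roll: {roll:.1f} degrees")
```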

Access

Access is a database management system (DBMS) application developed by Microsoft. It is part of the Microsoft 365 software suite, formerly known as Office. Access helps users create relational databases that store information across an organized structure of linked tables, and it provides an easy graphical user interface (GUI) for creating forms, queries, and reports without requiring advanced programming skills.

Small and medium businesses typically use Access databases to track product inventory, sales orders, customer data, and many other types of information. They can use Access to create macros that record repetitive actions and automate them later, or even write modules using Visual Basic for Applications (VBA) to create complex automations and applications. Access works alongside other Microsoft 365 productivity apps like Excel to share data across document types. For example, you can import sales records into an Excel file to create charts and spreadsheets; you can also import customer names and addresses into Word as part of a mail merge to create customized form letters and mailing labels.

A new Contacts List database in Access

The Access database file format is best for small- to medium-sized databases. Performance can suffer if an Access database grows too large or too many people try to access it at once. However, Access can act as a front end for other types of databases using Open Database Connectivity (ODBC) plug-ins. ODBC allows Access to read and write data to other systems, like Microsoft SQL Server, MySQL, and Oracle Database, that can handle large-scale databases more efficiently.

File extensions: .ACCDB, .MDB

Updated November 27, 2023 by Brian P.
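ODBC works in the other direction as well: an external script can read an Access database file through an ODBC driver. The sketch below assumes the third-party pyodbc package and the Microsoft Access ODBC driver are installed; the file path and table name are placeholders, not part of the original definition.

```python
import pyodbc  # third-party package; requires the Access ODBC driver on the system

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\contacts.accdb;"   # placeholder path to an .accdb file
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT FirstName, LastName FROM Contacts")  # hypothetical table
    for row in cursor.fetchall():
        print(row.FirstName, row.LastName)
```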

Access Point

An access point is a device, such as a wireless router, that allows wireless devices to connect to a network. Most access points have built-in routers, while others must be connected to a router in order to provide network access. In either case, access points are typically hardwired to other devices, such as network switches or broadband modems.

Access points can be found in many places, including houses, businesses, and public locations. In most houses, the access point is a wireless router, which is connected to a DSL or cable modem. However, some modems may include wireless capabilities, making the modem itself the access point. Large businesses often provide several access points, which allows employees to wirelessly connect to a central network from a wide range of locations. Public access points can be found in stores, coffee shops, restaurants, libraries, and other locations. Some cities provide public access points in the form of wireless transmitters that are connected to streetlights, signs, and other public objects.

While access points typically provide wireless access to the Internet, some are intended only to provide access to a closed network. For example, a business may provide secure access points to its employees so they can wirelessly access files from a network server. Also, most access points provide Wi-Fi access, but an access point can also refer to a Bluetooth device or another type of wireless connection. However, the purpose of most access points is to provide Internet access to connected users.

The term "access point" is often used synonymously with base station, though base stations are technically only Wi-Fi devices. It may also be abbreviated AP or WAP (for wireless access point). However, WAP is not as commonly used as AP, since WAP also stands for Wireless Application Protocol.

Updated August 11, 2009 by Per C.

Accessibility

Accessibility refers to the design of products and environments for people with disabilities. Examples include wheelchairs, entryway ramps, hearing aids, and braille signs. In the IT world, accessibility often describes hardware and software designed to help those who experience disabilities.

Hardware

Accessibility hardware may refer to a custom computer system designed for a specific person or simply an accessory that helps an individual use a computer. Examples of accessibility accessories include keyboards with large letters on the keys, oversized mice and trackballs, and pillow switches that can be activated with only a small amount of force. These and other devices can make it possible for users with disabilities to use computers in ways they would not be able to otherwise.

Software

Modern operating systems include standard accessibility options that can make them easier to use without the need for specialized hardware. For example, Windows and macOS both provide display modification options, such as magnification and inverting colors, which help those who have difficulty seeing. Text-to-speech can also be turned on to provide audible descriptions of objects and text on the screen. Dictation can be used to perform common tasks with vocal commands.

Accessibility options can be found in the following locations for common operating systems:

Windows: Settings → Ease of Access
macOS: System Preferences → Accessibility
iOS: Settings → General → Accessibility
Android: Settings → Accessibility

Updated February 1, 2017 by Per C.

ACID

Stands for "Atomicity, Consistency, Isolation, Durability." The ACID acronym defines four characteristics a database must have to ensure data integrity. Specifically, these qualities apply to database operations that write data to the database. Examples include inserting, updating, and removing records. The four ACID elements are described below.

1. Atomicity

Atomicity guarantees each transaction is an "all-or-nothing" event. In other words, it succeeds or fails completely. Atomic operations prevent data corruption by disallowing partial transactions. If an operation cannot be completed, it is "rolled back" to the previous state, as if it never happened.

Some database management systems may require a specific configuration to be ACID-compliant. For example, MySQL meets ACID standards, but only when using tables that support atomic operations. InnoDB tables are ACID-compliant since they support transactions, including COMMIT and ROLLBACK statements. MyISAM tables, which do not support transactions, are not ACID-compliant.

2. Consistency

Consistency is the assurance that only valid data is written to a database. For example, a database will not accept invalid transactions or unrecognizable data. Additionally, it may use a "doublewrite buffer" that temporarily stores new transactions. If the database or host system crashes unexpectedly, the data can be restored from the buffer.

3. Isolation

Isolation ensures each transaction is handled individually. Some databases read and write data several times per second, which may require concurrent transactions. Even when transactions take place at the same time, they can still be isolated from each other. For example, if one operation fails, it will not affect others taking place at the same time. Isolation is also essential for database security, since it prevents the data in one transaction from being visible to another.

4. Durability

Durability guarantees data will be stored once a transaction has been processed or "committed" to the database. It requires that data is written to non-volatile memory so that transactions are not lost if an application crashes or a power outage occurs. While database software can help ensure database durability, hardware is also important. For example, a RAID storage configuration can provide redundancy if a storage device fails. A UPS battery backup can prevent data loss by maintaining electrical power if the primary power source is unavailable.

NOTE: "Acid" (lowercase) is a web browser test that checks browser support for specific HTML tags and CSS rules. The most recent test, Acid3, was produced by the Web Standards Project group in 2008.

Updated September 5, 2020 by Per C.
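The "all-or-nothing" behavior described under Atomicity can be demonstrated with any ACID-compliant database. This sketch uses Python's built-in sqlite3 module (the table and values are hypothetical): if any statement in the transaction fails, the rollback leaves the database unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

cur = conn.cursor()
try:
    # Transfer 30 from alice to a recipient, as a single transaction
    cur.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    cur.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'missing'")
    if cur.rowcount == 0:           # recipient does not exist: fail the transaction
        raise ValueError("unknown account")
    conn.commit()                   # both updates are applied together...
except Exception:
    conn.rollback()                 # ...or neither is applied (atomicity)

print(conn.execute("SELECT * FROM accounts").fetchall())
# [('alice', 100), ('bob', 50)] -- the partial update was rolled back
```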

ACL

Stands for "Access Control List." An ACL is a list of user permissions for a file, folder, or other data object. The entries in an ACL specify which users and groups can access something and what actions they may perform. ACLs are used by operating systems, networking devices, and database management software to help administer access to resources.

Multi-user operating systems like Windows, macOS, and Linux use ACLs to manage permissions for files and folders in the file system. These ACLs govern access for both individual users and user groups and determine what actions they may perform. On Unix-based operating systems, these operations include read, write, and execute; ACLs on Windows support those same operations and add modify and full-control permissions.

Changing the permission level on a file in Windows

Database management systems use ACLs to control the ability to access and modify the contents of a database. A database ACL manages access to both database objects (like a specific table or view) and the data itself (in a table's rows and columns). Database administrators can control which users and user groups can add, update, and delete data from a database; they can even hide specific fields from certain users to keep some information secret while still allowing access to less sensitive data.

Network devices use their own ACLs to control and filter network traffic. However, instead of controlling access by users and groups, they use other criteria to decide whether to permit traffic to pass. Typical criteria include the source and destination IP address, port number, protocol, or even time of day. These criteria may even be combined to provide very fine control over network traffic. For example, a router may be configured using an ACL to block streaming video overnight while still allowing web browsing traffic to proceed.

Updated August 11, 2023 by Brian P.
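The core idea of an ACL entry, a user or group paired with the operations it may perform, can be modeled in a few lines. This Python sketch is a simplified illustration with made-up names, not how any particular operating system or device actually stores its ACLs.

```python
# Each entry maps a user or group to the operations it is allowed to perform.
acl = {
    "alice":      {"read", "write"},
    "developers": {"read", "execute"},
    "everyone":   {"read"},
}

def is_allowed(principal, operation):
    """Check whether a user or group may perform an operation."""
    return operation in acl.get(principal, set())

print(is_allowed("alice", "write"))       # True
print(is_allowed("everyone", "write"))    # False
```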

Activation Key

A software activation key is a string of letters and/or numbers used to register or activate a software application. You may receive an activation key when you purchase a commercial software program. The purpose of an activation key is to prevent software piracy by ensuring only users who have purchased a program can use it. Some software programs will not function without a valid activation key, while other programs will run in "trial mode" or with restricted functionality. Registering or activating the software with the activation key removes these limitations.

There is no standard activation key format, but most include both letters and numbers. They often contain groups of characters separated by dashes for readability. Most license keys contain at least 10 characters, and some may have over 30. Below is an example of a typical activation key:

FA50F-3B7E6-754C2-B3F12-F279E

Developers can implement software activation keys in several different ways. The most basic method is to simply check a key against a database or list of keys. If a user enters an activation key that matches one in the list, the software is successfully activated. A more advanced method is to generate activation keys dynamically based on a user's registration information (such as an email address). An algorithm in the program can then check the key to see if it corresponds with the email address used to register the software program. Modern software programs often use online activation, which verifies the key on a remote server. This allows the developer to limit the number of uses for each key and revoke keys if necessary.

Software activation keys may be found inside retail software boxes or order confirmation emails. You can typically open the activation window in a software program by locating the "Activate" or "Register" option in the program.

NOTE: An activation key may also be called a product key, software key, license key, registration code, or serial number.

Updated March 1, 2018 by Per C.
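One hypothetical way to implement the "generate keys from registration information" approach described above is to derive the key from a keyed hash of the user's email address. The secret value, key format, and function names below are illustrative only, not an actual vendor's scheme.

```python
import hashlib
import hmac

SECRET = b"vendor-secret"  # known only to the software vendor (placeholder)

def make_key(email):
    digest = hmac.new(SECRET, email.lower().encode(), hashlib.sha256).hexdigest()
    chars = digest.upper()[:25]
    # Group into five blocks of five characters, separated by dashes
    return "-".join(chars[i:i + 5] for i in range(0, 25, 5))

def is_valid(email, key):
    return hmac.compare_digest(make_key(email), key)

key = make_key("user@example.com")
print(key)                                # e.g. 3F7A1-0C2D9-...
print(is_valid("user@example.com", key))  # True
```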

Active Cell

An active cell refers to the currently selected cell in a spreadsheet. It can be identified by a bold (typically blue) outline that surrounds the cell. The standard way to reference the location of an active cell is with a column/row combination, such as A2 (first column, second row) or B5 (second column, fifth row).

Whenever you click on a specific cell within a spreadsheet, it becomes the active cell. Once a cell is selected, you can enter values or a function into the cell. Most spreadsheet programs will display the value of the active cell both inside the cell itself and within a long text field in the spreadsheet toolbar. The text field is helpful for viewing or modifying functions and for editing long text strings that don't fit in the active cell.

Most spreadsheet applications allow you to define a specific data type for individual cells. Therefore, you can use the cell formatting option in the toolbar or select Format → Cells… from the menu bar to choose the data type for the active cell. For example, if the active cell contains the price of an item, you may want to select "Currency" as the data type. You can also format the appearance of an active cell by selecting the font, text color, background color, and text styles.

In most cases, a spreadsheet only has one active cell at a time. However, it is possible to select multiple cells by dragging the cursor over a group of cells. In this case, all of the selected cells may be considered active cells. If you change the cell formatting options while multiple cells are selected, the changes will affect all of the active cells.

NOTE: When writing a function in Microsoft Excel, the ActiveCell property can be used to reference the active cell. The value of the active cell can be accessed using ActiveCell.Value.

Updated February 7, 2012 by Per C.

Active Directory

Active Directory (AD) is a Microsoft technology used to manage computers and other devices on a network. It is a primary feature of Windows Server, an operating system that runs both local and Internet-based servers.

Active Directory allows network administrators to create and manage domains, users, and objects within a network. For example, an admin can create a group of users and give them specific access privileges to certain directories on the server. As a network grows, Active Directory provides a way to organize a large number of users into logical groups and subgroups, while providing access control at each level.

The Active Directory structure includes three main tiers: 1) domains, 2) trees, and 3) forests. Several objects (users or devices) that all use the same database may be grouped into a single domain. Multiple domains can be combined into a single group called a tree. Multiple trees may be grouped into a collection called a forest. Each of these levels can be assigned specific access rights and communication privileges.

Active Directory provides several different services, which fall under the umbrella of "Active Directory Domain Services," or AD DS. These services include:

Domain Services – stores centralized data and manages communication between users and domains; includes login authentication and search functionality
Certificate Services – creates, distributes, and manages secure certificates
Lightweight Directory Services – supports directory-enabled applications using the open LDAP protocol
Directory Federation Services – provides single-sign-on (SSO) to authenticate a user in multiple web applications in a single session
Rights Management – protects copyrighted information by preventing unauthorized use and distribution of digital content

AD DS is included with Windows Server and is designed to manage client systems. While systems running the regular version of Windows do not have the administrative features of AD DS, they do support Active Directory. This means any Windows computer can connect to a Windows workgroup, provided the user has the correct login credentials.

Updated July 13, 2017 by Per C.
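Directory-enabled applications typically talk to AD DS over LDAP, as noted under Lightweight Directory Services above. The sketch below uses the third-party Python ldap3 package; the domain controller address, credentials, and search base are all placeholders.

```python
from ldap3 import ALL, Connection, Server  # third-party ldap3 package

server = Server("ldap://dc1.example.com", get_info=ALL)   # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\svc_reader",
                  password="********", auto_bind=True)

# Look up a user account and the groups it belongs to
conn.search("dc=example,dc=com",
            "(sAMAccountName=jdoe)",
            attributes=["cn", "memberOf"])

for entry in conn.entries:
    print(entry.cn, entry.memberOf)
```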

Active-Matrix

Active-matrix is a technology used in LCD displays, such as laptop screens and flat-screen monitors. It uses a matrix of thin film transistors (TFTs) and capacitors to control the image produced by the display. The brightness of each pixel is controlled by modifying the electrical charge of the corresponding capacitors. Each pixel's color is controlled by altering the charge of individual capacitors that emit red, green, and blue (RGB) light.

The term "active-matrix" refers to the active nature of the capacitors in the display. Unlike a passive-matrix display, which must charge full rows of wires to alter individual pixels, an active-matrix display can control each pixel directly. This results in a significantly faster response time, meaning the pixels can change state much more rapidly. In practical terms, an active-matrix monitor can display motion and fast-moving images more clearly than a passive-matrix display can. The fast switching of TFTs also prevents the "ghosting" of the cursor that is common on passive-matrix screens.

Since active-matrix technology provides individual control of each pixel, active-matrix screens typically exhibit more even brightness and color across the screen than passive-matrix displays. Because of these multiple advantages, most modern computer monitors, laptop screens, and LCD televisions use active-matrix screens.

Updated March 18, 2011 by Per C.

ActiveX

ActiveX is a technology introduced by Microsoft in 1996 as part of the OLE framework. It includes a collection of prewritten software components that developers can implement within an application or webpage. This provides a simple way for programmers to add extra functionality to their software or website without needing to write code from scratch.

Software add-ons created with ActiveX are called ActiveX controls. These controls can be implemented in all types of programs, but they are most commonly distributed as small Web applications. For example, a basic ActiveX control might display a clock on a webpage. Advanced ActiveX controls can be used for creating stock tickers, interactive presentations, or even Web-based games.

ActiveX controls are similar to Java applets, but run through the ActiveX framework rather than the Java Runtime Environment (JRE). This means you must have ActiveX installed on your computer in order to view ActiveX controls in your Web browser. Additionally, when loading a custom ActiveX control within a webpage, you may be prompted to install it. If this happens, you should only accept the download if it is from a trusted source.

While ActiveX provides a convenient way for Web developers to add interactive content to their websites, the technology is not supported by all browsers. In fact, ActiveX is only officially supported by Internet Explorer for Windows. Therefore, ActiveX controls are rarely used in today's websites. Instead, most interactive content is published using Flash, JavaScript, or embedded media.

Updated April 1, 2011 by Per C.

Ad Hoc Network

An ad hoc network is a temporary network created between two devices without utilizing any other networking infrastructure. These networks exist for a single session, often for a specific purpose like transferring files or sharing an Internet connection. Ad hoc networks do not require a router or wireless access point. Instead, the devices themselves route network traffic.

Ad hoc networks are often direct peer-to-peer Wi-Fi connections created between two computers or mobile devices. However, these networks are not limited to two devices. For example, a smartphone can also create an ad hoc wireless network to serve as a mobile hotspot to share its Internet connection. Multiple devices can connect to a wireless ad hoc network, though performance may degrade as additional devices are added.

"Ad hoc" is a Latin phrase that means "for this purpose." Wired networks may also be considered ad hoc. A temporary network can be established between two computers using a single Ethernet cable. Two computers with Thunderbolt ports can use a Thunderbolt cable to establish a temporary connection.

NOTE: Ad hoc networks generally use a low level of security, so it's best to only use them temporarily.

Updated February 6, 2023 by Brian P. Reviewed by Per C.

Adapter

An adapter is a device that allows a specific type of hardware to work with another device that would otherwise be incompatible. Examples of adapters include electrical adapters, video adapters, audio adapters, and network adapters.

An electrical adapter, for instance, may convert the incoming voltage from 120V to 12V, which is suitable for a radio or other small electronic device. Without regulating voltage through an adapter, the incoming electrical surge could literally fry the internal components of the device. Most consumer electronics have adapters attached to the plug at the end of the electrical cord. Whenever you see a plug surrounded by a large box, it is most likely an electrical adapter. You can typically find the input and output voltage printed directly on the adapter. A device that does not have an adapter on the end of its electrical cable typically has a built-in voltage adapter. For example, desktop computers typically have the adapter built into the internal power supply.

Video adapters and audio adapters adapt one type of interface to another type of connector. For example, a DVI to VGA adapter allows you to connect the DVI output of a laptop to the VGA input of a projector. Most professional audio devices use 1/4" audio jacks, while most computers have 1/8" "minijacks" for audio input and output. Therefore, 1/4" to 1/8" audio adapters are often used to import audio into computers. Likewise, a 1/8" to 1/4" adapter can be used to output audio from a computer to a professional audio system. Since a large number of audio and video interfaces exist, there are hundreds of audio and video adapters available.

Network cards, or NICs, are also called network adapters. These include Ethernet cards, internal Wi-Fi chips, and external wireless transmitters. While these devices don't convert connections like audio or video adapters, they enable computers to connect to a network. Since the network card makes it possible to connect to an otherwise incompatible network, the card serves as an adapter. Similarly, video cards are sometimes called video adapters because they convert a video signal to an image that can be displayed on a monitor.

Updated July 13, 2011 by Per C.

Adaptive Content

Adaptive content is digital content that is optimized for multiple devices. It may include text, images, video, and other types of multimedia. The content may simply adapt to your screen size or may appear differently depending on the device on which it is accessed.

The most popular way to display adaptive content on the web is through responsive web design. By using CSS media queries and fluid layouts, web developers can create websites that adjust to the size of your browser window. When you load a responsive webpage, a media query detects your window size and your browser displays the corresponding layout. Media queries are often used in combination with fluid layouts, which define sections of a page in percentages rather than fixed pixels. This allows the content to fill different screen sizes more evenly.

Another way to display adaptive web content is to detect what device a person is using. For example, when you access a website on your mobile phone, it may direct you to a separate mobile site that is designed specifically for smartphones. The layout may include larger text, simpler navigation, and larger buttons to make it easily accessible with a touchscreen. Mobile sites often use unique URLs, such as m.example.com.

While websites are the most common example of adaptive content, they are not the only kind. Software, for instance, can be adapted to multiple devices and screen sizes. Many productivity programs, such as Microsoft Office and Apple's iWork applications, are now developed as mobile apps alongside the traditional desktop versions. Many games that used to run only on desktop computers are now available for mobile devices as well. Some programs now come in three different versions – for desktops, tablets, and smartphones.

Adaptive content extends to other mediums as well. For example, a smart home thermostat may have a digital interface that displays temperature and humidity information as well as outside weather data downloaded from the Internet. You might be able to access this data by logging into your account through the web or an app. Similarly, the information displayed on your automobile's LCD panel might also be accessible through the web or a mobile app interface. In each case, developers must create adaptive content that displays the information correctly on each device.

Updated January 21, 2015 by Per C.
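Server-side device detection, mentioned above as one way to serve adaptive content, can be as simple as inspecting the User-Agent header and redirecting phones to a mobile site. This is a bare-bones sketch using Python's standard library, not a production approach; the hostnames and user-agent hints are placeholders.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MOBILE_HINTS = ("iPhone", "Android", "Mobile")

class AdaptiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        if any(hint in user_agent for hint in MOBILE_HINTS):
            # Send phones to the mobile layout, e.g. m.example.com
            self.send_response(302)
            self.send_header("Location", "https://m.example.com" + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<p>Desktop layout</p>")

HTTPServer(("", 8000), AdaptiveHandler).serve_forever()
```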

ADC

Stands for "Analog-to-Digital Converter." An ADC is a hardware component that converts an analog signal into a digital signal. Computers and other digital devices can only handle digital data, so getting input from an analog device requires processing by an ADC. Many kinds of input go through an ADC, including audio and digital photography. An ADC performs the opposite function of a digital-to-analog converter (DAC).

ADCs convert an analog signal into a digital one in two primary steps. First, the ADC samples the analog signal at regular intervals and measures its amplitude; how often the ADC takes samples is called the sample rate. Next, it quantizes each sample to a specific value within a numerical range, rounding the measurement to the nearest available value. The number of bits used to store each sample's value determines the ADC's resolution. For example, an 8-bit ADC represents an analog signal using 256 discrete values (2^8). The overall quality and accuracy of the resulting digital signal typically improves as the sample rate and resolution increase.

Several common computer workflows use ADCs to convert analog data into digital data. Every computer sound card uses an ADC to convert the audio signal coming in through its microphone and line-in ports. Video input devices like a converter box or tuner card convert an analog video signal into digital video. Digital camera sensors convert the color and intensity of the light that hits each pixel into digital data. Finally, LCD monitors and televisions that support VGA or composite video connections use an ADC to convert the analog video signal into digital data that controls the color of each pixel on the screen.

Updated December 26, 2023 by Brian P.
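The two steps described above, sampling at regular intervals and then quantizing each sample to one of 2^n levels, can be simulated in a few lines of Python. The signal, sample rate, and number of samples here are arbitrary illustrative values.

```python
import math

SAMPLE_RATE = 8000      # samples per second
BITS = 8                # resolution: 2**8 = 256 discrete levels
LEVELS = 2 ** BITS

def analog_signal(t):
    """A stand-in for an analog input: a 440 Hz sine wave between -1 and 1."""
    return math.sin(2 * math.pi * 440 * t)

samples = []
for n in range(16):                                   # take 16 samples
    t = n / SAMPLE_RATE                               # 1. sample at regular intervals
    amplitude = analog_signal(t)
    code = round((amplitude + 1) / 2 * (LEVELS - 1))  # 2. quantize to 0..255
    samples.append(code)

print(samples)
```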

Add-on

An add-on is a software extension that adds extra features to a program. It may extend certain functions within the program, add new items to the program's interface, or give the program additional capabilities. For example, Mozilla Firefox, a popular Web browser, supports add-ons such as the Google toolbar, ad blockers, and Web developer tools. Some computer games support add-ons that provide extra maps, new characters, or game-editing capabilities.

Most add-ons are available as self-installing packages. This means the user can simply double-click the add-on package to install the files for the corresponding program. Other add-ons may require the user to manually move files into specific directories.

While not all programs support add-ons, many programs are now developed with add-on support, since it provides a simple way for other developers to extend the functions of the program. However, not all software programs refer to these extra features as "add-ons." For example, Dreamweaver supports "extensions," which add extra Web development features, while Excel can import "Add-Ins" that provide the user with extra spreadsheet tools. Many programs also support plug-ins, which may be considered a type of add-on.

Updated December 22, 2008 by Per C.

Address Bar

An address bar is a text field near the top of a Web browser window that displays the URL of the current webpage. The URL, or web address, reflects the address of the current page and automatically changes whenever you visit a new webpage. Therefore, you can always check the location of the webpage you are currently viewing with the browser's address bar.

While the URL in the address bar updates automatically when you visit a new page, you can also manually enter a web address. Therefore, if you know the URL of a website or specific page you want to visit, you can type the URL in the address bar and press Enter to open the location in your browser.

NOTE: The URL typically begins with "http://", but most browsers will automatically add the HTTP prefix to the beginning of the address if you don't type it in.

The appearance of the address bar varies slightly between browsers, but most browsers display a small 16x16 pixel icon directly to the left of the URL. This icon is called a "favicon" and provides a visual identifier for the current website. Some browsers also display an RSS feed button on the right side of the address bar when you visit a website that offers RSS feeds. In the Safari web browser, the address bar also doubles as a progress bar when pages are loading and includes a refresh button on the right side. Firefox includes a favorites icon on the right side of the address bar that lets you add or edit a bookmark for the current page.

The address bar is sometimes also called an "address field." However, it should not be confused with a browser toolbar, such as the Google or Yahoo! Toolbar. These toolbars typically appear underneath the address bar and may include a search field and several icons.
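The note about browsers adding the "http://" prefix can be illustrated with a tiny helper that mimics that behavior before parsing the typed address. This is a simplified sketch using Python's standard urllib, not how any particular browser is actually implemented.

```python
from urllib.parse import urlparse

def normalize_address(text):
    """Add a scheme if the user typed a bare address like 'example.com/page'."""
    if "://" not in text:
        text = "http://" + text
    return urlparse(text)

url = normalize_address("techterms.com/definition/404error")
print(url.scheme, url.netloc, url.path)   # http techterms.com /definition/404error
```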

ADF

Stands for "Automatic Document Feeder." An ADF is a device that automatically feeds pages into a copier or scanner. It allows a stack of pages to be scanned or copied without user intervention. An ADF is a useful tool when scanning a long document for OCR input.

Automatic document feeders are a common feature on large, multifunction printer/scanner/copier machines meant for office use. Small standalone document scanners also often include an ADF. The mechanism for automatically feeding documents differs between models, but most can handle up to 50 printed pages.

ADF scanners differ from flatbed scanners, which require each page to be placed manually on the scanning surface. Some large multifunction devices include both an automatic document feeder and a flatbed scanner, allowing users to choose whichever scanner works best for their document.

Updated November 9, 2022 by Brian P.

ADSL

Stands for "Asymmetric Digital Subscriber Line." ADSL is a type of DSL Internet connection. Like SDSL, it transfers digital data over copper telephone lines. The word "asymmetric" means that the upload and download bandwidth are different; the download speed offered to the user is higher than the upload speed.

An ADSL Internet connection is asymmetrical because a copper telephone line only offers a limited amount of bandwidth. Upload and download traffic occupy different parts of the frequency spectrum transmitted over the lines, with the ISP determining the part of the spectrum allocated to each. Since most home Internet users download a lot more data than they upload, ISPs allocate more of that spectrum to download speeds than to upload speeds.

An SDSL line, on the other hand, splits the available spectrum evenly. This provides a symmetrical connection with equal upload and download speeds, but it also means that the maximum download speed of an SDSL line is lower than that of an ADSL line.

Updated November 14, 2022 by Brian P.

Advertising ID

An advertising ID is a unique identifier string assigned to a device by its operating system. It is similar to a web cookie, but instead of tracking a person's activity across websites, it helps track it across the apps installed on their device. Advertisers can use the activity linked to an advertising ID to create profiles, which they then use to deliver targeted ads within ad-supported apps. Users can reset their device's advertising ID to start over with a new identity or disable it entirely to replace targeted ads with non-personalized ones.

Android, iOS, and Windows platforms all use advertising IDs to help developers and advertisers track users' behavior, including which apps they have installed and how they use them. By analyzing usage patterns, advertisers can build a profile for each advertising ID based on the user's interests and demographics. With this data, advertisers can show ads that are relevant to the user and more likely to get engagement. For example, an outdoorsy person may see an ad for a kayak, whereas a music fan may see a new set of headphones instead.

The option to enable or disable the use of Advertising IDs in the Windows settings app

Each platform uses a different name: Android calls them Google Advertising IDs (GAIDs), iOS uses Identifier for Advertisers (IDFA) strings, and Windows calls them Advertising IDs. They all perform the same function. Android and iOS advertising IDs are randomly generated UUID strings consisting of 32 hexadecimal digits, while Windows generates a slightly longer 33-character string.

While the data linked to advertising IDs should not be specific enough to identify a specific person, many people are still concerned about the privacy implications of extensive tracking. To give their users more control over their information, these platforms all provide ways to reset a device's advertising ID at any time or opt out of ad personalization entirely.

Updated November 30, 2023 by Brian P.
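Because the Android and iOS identifiers described above are UUIDs, "resetting" an advertising ID essentially means generating a fresh random value. A minimal Python illustration (not platform code):

```python
import uuid

advertising_id = uuid.uuid4()   # random UUID: 32 hexadecimal digits
print(advertising_id)           # e.g. 0f8b5a3e-... (varies every run)

# Resetting the ID simply replaces it with a new random value,
# breaking the link to any previously collected activity.
advertising_id = uuid.uuid4()
print(advertising_id)
```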

Adware

Adware Adware is free software that is supported by advertisements. Instead of generating revenue through sales, an adware developer sells online ads that appear while the app runs. The developer may also sell an upgraded ad-free version of their software to users who want to use the app without ads. Smartphone and tablet applications frequently use an adware model. Apps may include a small banner ad while you use them to provide some revenue for the developer. Some apps, particularly mobile games, may require that you watch a short video ad between uses or game levels. An in-app purchase is often available to remove ads for a one-time purchase or recurring subscription. Most adware is safe to use, but some may inject ads into other parts of the operating system's user interface, serving as a form of malware. These ads may appear as popups or as extra ads injected into webpages. Ads introduced by malware may be difficult to remove, appearing even after the adware that introduced them is uninstalled. You may need to use antivirus and antimalware software to completely remove malicious ads. Updated March 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/adware Copy

Affiliate

Affiliate An Internet affiliate is a company, organization, or individual that markets another company's products through their website. In exchange for marketing their products, companies pay affiliates a commission for each sale they generate. Affiliate programs exist for many different industries, such as travel, clothing, technology, and online services. This allows web publishers to promote specific products or services related to the content of their websites. For example, the webmaster of a fashion website may publish affiliate banners for a clothing store. The owner of a software review website may include affiliate links to different software programs. Affiliate marketing is a type of PPS advertising, since affiliates are only paid for sales they produce (unlike PPC advertising). Therefore, merchants must offer affiliates high enough commissions to make it worthwhile for the publishers to run their ads. Affiliate commissions vary widely between industries and also depend on average sale amounts. Low-margin products, such as consumer electronics, may offer commissions as low as 2%, while high-margin products, such as computer software, may offer commissions as high as 75%. Most affiliate commissions fall in the range of 5 to 20%. Affiliate programs provide free marketing for merchants and an extra source of revenue for web publishers. While it is a win-win partnership, setting up an affiliate system to track sales and generate payments is a complex process. Therefore, many companies run their affiliate programs through a third party e-commerce platform, such as Commission Junction or DirectTrack. Updated May 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/affiliate Copy

AFP

AFP Stands for "Apple Filing Protocol." AFP is a protocol developed by Apple for sharing files over a network. It was used in early versions of Apple's Macintosh operating systems and is also supported in macOS 10. However, AFP has mostly been replaced by the standard SMB (Server Message Block) protocol. Apple introduced AFP (originally the "AppleTalk Filing Protocol") with Macintosh System 6 in the late 1980s. It was part of the AppleTalk networking suite, which allowed Macs to communicate over a local network. AFP was also the foundational protocol used by AppleShare, a file server application. For several decades, AFP was the primary protocol used to share files between Macs. You can connect to an AFP server from the macOS Finder by selecting Go → Connect to Server... Then choose or manually type in the server's address beginning with afp://. In 2001, Apple released Mac OS X 10.0 and updated AFP to version 3.0 with significant changes. AFP 3 was the first version to support Unix privileges, files over 2 gigabytes, and UTF-8 filenames. The last update to the protocol was AFP 3.4, introduced with OS X 10.8 "Mountain Lion." Later versions of macOS have prioritized SMB as the primary protocol for file sharing. NOTE: While modern versions of macOS and macOS Server still support AFP, the protocol does not work with storage devices formatted using the APFS file system. Updated February 11, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/afp Copy

AGP

AGP Stands for "Accelerated Graphics Port." AGP is a type of expansion slot designed specifically for graphics cards. It was developed in 1996 as an alternative to the PCI standard. Since the AGP interface provides a dedicated bus for graphics data, AGP cards are able to render graphics faster than comparable PCI graphics cards. Like PCI slots, AGP slots are built into a computer's motherboard. They have a similar form factor to PCI slots, but can only be used for graphics cards. Additionally, several AGP specifications exist, including AGP 1.0, 2.0, and 3.0, which each use a different voltage. Therefore, AGP cards must be compatible with the specification of the AGP slot they are installed in. Since AGP cards require an expansion slot, they can only be used in desktop computers. While AGP was popular for about a decade, the technology has been superseded by PCI Express, which was introduced in 2004. For a few years, many desktop computers included both AGP and PCI Express slots, but eventually AGP slots were removed completely. Therefore, most desktop computers manufactured after 2006 do not include an AGP slot. Updated November 20, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/agp Copy

AIFF

AIFF Stands for "Audio Interchange File Format." AIFF is a file format designed to store audio data. It was developed by Apple Computer, but is based on Electronic Arts' IFF (Interchange File Format), a container format originally used on Amiga systems. A standard AIFF file contains 2 channels of uncompressed stereo audio with a sample size of 16 bits, recorded at a sampling rate of 44.1 kilohertz. This is also known as "CD-quality audio," since CDs use the same audio specifications. AIFF audio takes up just over 10MB per minute of audio, which means a 4-minute song saved as an AIFF will require just over 40MB of disk space. This is nearly identical to a .WAV file (which uses the same sample size and sampling rate as an AIFF file). However, it is about ten times the size of a similar MP3 file recorded at 128 kbps, or five times the size of an MP3 file recorded at 256 kbps. Since compressed and uncompressed audio files sound nearly the same, most digital audio distributed over the Internet is saved in a compressed format, such as an .MP3 or .M4A file. This makes downloading audio from websites or the iTunes Store much faster and more efficient. However, AIFF files are still commonly used for audio recording, since it is important to save the original audio data in an uncompressed format. By working with uncompressed AIFF files, audio engineers can ensure that the sound quality is maintained throughout the mixing and mastering process. Once the final version of a song or other audio project is saved, it can then be exported in a compressed format. NOTE: While the standard AIFF format does not support compressed audio data, Apple developed a variation of the AIFF format, called AIFF-C, which supports audio compression. This format is also based on the original IFF format, but includes extra space in the file structure to define the type of the compression. Therefore, the AIFF-C format can store audio generated from multiple compression algorithms. File extensions: .AIF, .AIFF, .AIFC Updated December 30, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/aiff Copy
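The file sizes mentioned above follow directly from the audio specifications. Here is a quick calculation, assuming CD-quality settings and 1 MB = 1,048,576 bytes:

#include <iostream>

int main() {
    const double sampleRate     = 44100;   // samples per second
    const double bytesPerSample = 2;       // 16-bit samples
    const double channels       = 2;       // stereo

    double bytesPerSecond = sampleRate * bytesPerSample * channels;   // 176,400 bytes
    double mbPerMinute    = bytesPerSecond * 60 / (1024 * 1024);      // about 10.1 MB
    double songSizeMB     = mbPerMinute * 4;                          // about 40.4 MB

    std::cout << "One minute of AIFF audio: " << mbPerMinute << " MB\n";
    std::cout << "A four-minute song: " << songSizeMB << " MB\n";
}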

AIX

AIX Stands for "Advanced Interactive eXecutive." AIX is a proprietary UNIX-based operating system developed and sold by IBM. It was first released in 1986 for use on both workstations and servers, with later versions focused on the needs of enterprise servers. It runs on IBM's Power series of server computers, which are popular in finance, healthcare, and many other industries. AIX is well-regarded for its stability and robust set of security features. It includes a Live Update feature that allows the kernel to receive updates without rebooting the system or interrupting any services, drastically reducing downtime. AIX is also optimized for resource management, which helps it maintain stability even while the system is under high load. It includes a broad set of security features, including support for encrypted file systems and role-based access control. It incorporates two features, Secure Boot and Trusted Execution, that verify the digital signatures of the system's firmware, scripts, executables, and kernel extensions to prevent malware intrusions. Since it is now designed specifically for IBM's Power servers, AIX only runs on IBM's POWER series of RISC processors. It integrates with many types of server software to run enterprise software applications, databases, web servers, and development servers. AIX can also run Linux applications that have been recompiled for the platform. Updated July 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/aix Copy

Ajax

Ajax is a combination of Web development technologies used for creating dynamic websites. While the term "Ajax" is not written in all caps like most tech acronyms, the letters stand for "Asynchronous JavaScript And XML." Therefore, websites that use Ajax combine JavaScript and XML to display dynamic content. The "asynchronous" part of Ajax refers to the way requests are made to the Web server. When a script sends a request to the Web server, it may receive data, which can then be displayed on the Web page. Since these events happen at slightly different times, they are considered to be asynchronous. Most Ajax implementations use the XMLHttpRequest API, which includes a list of server requests that can be called within JavaScript code. The data is usually sent back to the browser in an XML format, since it is easy to parse. However, it is possible for the server to send data as unformatted plain text as well. What makes Ajax so powerful is that scripts can run on the client side, rather than on the server. This means a JavaScript function can make a request to a server after a webpage has already finished loading. The data received from the server can then be displayed on the page without reloading the other content. If a server-side scripting language like PHP or ASP was used, the entire page would need to be reloaded in order for the new content to be displayed. While you may not realize it, you have probably seen Ajax at work on several different websites. For example, search engines that provide a list of search suggestions as you type are most likely using Ajax to display the suggestions. Image searches that produce more thumbnails as you scroll through the results typically use Ajax to retrieve the continual list of images. When you click "Older Posts" at the bottom of a Facebook page, Ajax is used to display additional postings. Ajax has helped make the Web more dynamic by enabling webpages to retrieve and load new content without needing to reload the rest of the page. By using Ajax, Web developers can create interactive websites that use resources efficiently and provide visitors with a responsive interface.

Alert Box

Alert Box An alert box is a pop-up window that notifies users about important events. It typically appears to provide a warning, request confirmation, or convey critical information. For example, when you attempt to delete files from the Trash or Recycle Bin, an alert box may appear with a message saying, "Are you sure you want to permanently delete these items?" You can click OK to proceed or Cancel to stop the action. Similarly, when you reset a device or program to its default settings, you'll most likely see an alert box that asks you to confirm the action. Alert boxes often act as safeguards to prevent unintended actions. For example, when you try to close an unsaved document, an alert box may ask, "Do you want to save changes before closing?" You can then choose an option such as Save, Don't Save, or Cancel. While many alert boxes provide multiple options, some only offer a single acknowledgment button, like OK. For instance, if a program crashes unexpectedly, an alert box might appear stating, "The application has encountered an error and will close." Updated December 7, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/alert_box Copy
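Alert boxes are usually created with a single call to the operating system or GUI toolkit rather than built by hand. As a minimal sketch, a Windows program could display a confirmation-style alert box using the Windows API MessageBox function:

#include <windows.h>

int main() {
    // Show an alert box with OK and Cancel buttons and a warning icon
    int choice = MessageBoxA(NULL,
                             "Are you sure you want to permanently delete these items?",
                             "Confirm Delete",
                             MB_OKCANCEL | MB_ICONWARNING);

    if (choice == IDOK) {
        // The user clicked OK: carry out the action
    } else {
        // The user clicked Cancel or closed the box: do nothing
    }
    return 0;
}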

Algorithm

Algorithm An algorithm is a set of instructions designed to perform a specific task. This can be a simple process, such as multiplying two numbers, or a complex operation, such as playing a compressed video file. Search engines use proprietary algorithms to display the most relevant results from their search index for specific queries. In computer programming, algorithms are often created as functions. These functions serve as small programs that can be referenced by a larger program. For example, an image viewing application may include a library of functions that each use a custom algorithm to render different image file formats. An image editing program may contain algorithms designed to process image data. Examples of image processing algorithms include cropping, resizing, sharpening, blurring, red-eye reduction, and color enhancement. In many cases, there are multiple ways to perform a specific operation within a software program. Therefore, programmers usually seek to create the most efficient algorithms possible. By using highly-efficient algorithms, developers can ensure their programs run as fast as possible and use minimal system resources. Of course, not all algorithms are created perfectly the first time. Therefore, developers often improve existing algorithms and include them in future software updates. When you see a new version of a software program that has been "optimized" or has "faster performance," it most likely means the new version includes more efficient algorithms. Updated August 2, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/algorithm Copy
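As a simple illustration, here is one possible algorithm, written as a C++ function, for finding the largest value in a list. It scans each value once and keeps track of the biggest value seen so far:

#include <iostream>
#include <vector>

// A simple algorithm: scan every value once and remember the largest seen so far
int findLargest(const std::vector<int>& values) {
    int largest = values[0];                          // assumes the list is not empty
    for (std::size_t i = 1; i < values.size(); i++) {
        if (values[i] > largest) {
            largest = values[i];
        }
    }
    return largest;
}

int main() {
    std::vector<int> scores = {42, 7, 98, 15, 63};
    std::cout << findLargest(scores) << std::endl;    // prints 98
}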

Alpha Software

Alpha Software Alpha software is computer software that is still in the early testing phase. It is functional enough to be used, but is unpolished and often lacks many of the features that will be included in the final version of the program. The "alpha phase" of software development follows the early programming and design stages, but precedes the "beta phase" in which the software closely resembles the final version. Since the alpha phase is an early part of the software development cycle, alpha software typically includes significant bugs and usability issues. Therefore, while beta software may be provided to the public, alpha software is only tested internally. The alpha stage is also important for competitive reasons, as the developer may not want to disclose the new features of a software program until shortly before the release date. If a developer is building a small application, he may be the only person who ever tests the alpha version. Larger programs, however, are often tested internally by a team of developers during the alpha phase. In some cases, multiple teams may work together on the alpha version of a software program. Once the programmers have built a working version with all the necessary features, the lead developer may decide to implement a "feature freeze," which means no additional features are planned for the current version of the program. This often signals the end of the alpha phase and the beginning of the beta stage of development. Updated April 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/alpha_software Copy

ALU

ALU Stands for "Arithmetic Logic Unit." An ALU is an integrated circuit within a CPU or GPU that performs arithmetic and logic operations. Arithmetic instructions include addition, subtraction, and shifting operations, while logic instructions include boolean comparisons, such as AND, OR, XOR, and NOT operations. ALUs are designed to perform integer calculations. Therefore, besides adding and subtracting numbers, ALUs often handle the multiplication of two integers, since the result is also an integer. However, ALUs typically do not perform division operations, since the result may be a fraction, or a "floating point" number. Instead, division operations are usually handled by the floating-point unit (FPU), which also performs other non-integer calculations. While the ALU is a fundamental component of all processors, the design and function of an ALU may vary between different processor models. For example, some ALUs only perform integer calculations, while others are designed to handle floating point operations as well. Some processors contain a single ALU, while others include several arithmetic logic units that work together to perform calculations. Regardless of the way an ALU is designed, its primary job is to handle integer operations. Therefore, a computer's integer performance is tied directly to the processing speed of the ALU. Updated March 24, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/alu Copy
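These arithmetic and logic instructions correspond directly to the integer operators available in most programming languages. A brief C++ illustration of the operations mentioned above:

#include <iostream>

int main() {
    unsigned int a = 12;   // binary 1100
    unsigned int b = 10;   // binary 1010

    // Arithmetic instructions
    std::cout << (a + b)  << "\n";   // addition: 22
    std::cout << (a - b)  << "\n";   // subtraction: 2
    std::cout << (a << 1) << "\n";   // shift left by one bit: 24

    // Logic (boolean) instructions
    std::cout << (a & b)  << "\n";   // AND: binary 1000 = 8
    std::cout << (a | b)  << "\n";   // OR:  binary 1110 = 14
    std::cout << (a ^ b)  << "\n";   // XOR: binary 0110 = 6
    std::cout << (~a)     << "\n";   // NOT: flips every bit of a
}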

Analog

Analog Analog is an adjective that describes a continuous measurement or transmission of a signal. It is often contrasted with digital, which is how computers store and process data using ones and zeros. While computers are digital devices, human beings are analog. Everything we perceive, such as what we see and hear, is a continuous transmission of information to our senses. This continuous stream of input is not estimated, but received and processed by our brains as analog data. Analog vs Digital Equipment Though we live in an era called the "digital age," many analog devices still exist. For example, a turntable (or record player) is an analog device, since the needle continuously reads the bumps and grooves on a record. Conversely, a CD player is digital since it reads binary data that represents an audio signal. A thermometer that displays the temperature using a mercury level or needle is considered analog, while one that displays the temperature on a screen is digital. Digital audio/video equipment and software programs approximate audio or video signals using a process called sampling. While they may "sample" an analog signal several thousand times per second, the digital version is still an estimation. The analog signal has an infinite number of data points and therefore is more accurate than a digital one. While analog data has its advantages, digital data is far easier to manage and edit. Since computers only process digital data, most information today is stored digitally. Digital data is also easier to duplicate and preserve. An exact copy of a CD or hard drive can be created over and over again, while copying an analog tape will eventually degrade the quality. Converting between Analog and Digital It is possible to convert analog data to digital and vice versa. For example, you can use an analog-to-digital converter (ADC), to convert analog VHS tapes into digital movies that can be played on your PC. Similarly digital audio data can be converted to an analog signal that can be played through a speaker. This is done using a digital-to-analog converter (DAC), which may exist in either the playback device (such as a computer) or the speaker. NOTE: If a speaker receives a digital input (such as a USB or Toslink connection), it must use a built-in DAC to convert the signal to analog. If it receives an analog input, such as a 1/8" audio cable, RCA jack, or speaker wire, the analog signal can be output directly through the speaker. Updated September 12, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/analog Copy
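The sampling process described above can be sketched in a few lines of code. This greatly simplified example "measures" a continuous sine wave at discrete instants and quantizes each measurement to a 16-bit integer, the same basic idea an analog-to-digital converter applies to a real signal:

#include <cmath>
#include <cstdint>
#include <iostream>

int main() {
    const double pi         = 3.14159265358979;
    const double sampleRate = 44100.0;   // samples per second (CD quality)
    const double frequency  = 440.0;     // a 440 Hz analog tone

    // Measure the continuous signal at the first ten discrete sample times
    for (int n = 0; n < 10; n++) {
        double t = n / sampleRate;                         // time of this sample, in seconds
        double analogValue = std::sin(2 * pi * frequency * t);

        // Quantize to a 16-bit integer, as an ADC would
        std::int16_t digitalValue = static_cast<std::int16_t>(analogValue * 32767);
        std::cout << digitalValue << "\n";
    }
}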

Android

Android Android is a mobile operating system developed by Google. It is used by several smartphones and tablets. Examples include the Sony Xperia, the Samsung Galaxy, and the Google Nexus One. The Android operating system (OS) is based on the Linux kernel. Unlike Apple's iOS, Android is open source, meaning developers can modify and customize the OS for each phone. Therefore, different Android-based phones often have different graphical user interfaces (GUIs) even though they use the same OS. Android phones typically come with several built-in applications and also support third-party programs. Developers can create programs for Android using the free Android software developer kit (SDK). Android programs are written in Java and run through a Java virtual machine (JVM) that is optimized for mobile devices. The "Dalvik" JVM was used through Android 4.4 and was replaced by Android Runtime or "ART" in Android 5.0. Users can download and install Android apps from Google Play and other locations. If you are unsure what operating system your phone or tablet uses, you can view the system information by selecting "About" in the Settings menu. This is also a good way to check if your device meets an app's system requirements. The name "Android" comes from the term android, which is a robot designed to look and act like a human. Updated May 16, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/android Copy

Animated GIF

Animated GIF An animated GIF is a GIF file that includes multiple images or "frames." These frames are played back in sequence when the file is opened or displayed in a web browser. The result is an animated clip or a short movie. The GIF file format includes a Graphics Control Extension (or "GCE block"), which enables a single GIF file to store multiple frames. This section also specifies the delay between frames, which can be used to set the frame rate or insert pauses at certain points within the animation. Another section, called the Netscape Application Block (NAB), specifies how many times the animation will repeat (a setting of "0" is used for infinite repetitions). In the early years of the Web, animated GIFs were a popular way to display motion and liven up websites. They were commonly used for advertisements, such as banners and leaderboards. As Flash animations became more popular, animated GIFs became less prominent. However, animated GIFs have recently seen a resurgence on the web since they are supported by all platforms. For example, Apple's iOS does not support Flash animations, but can display animated GIFs. Several image editing programs, such as Adobe Photoshop and GIMP, can be used to create animated GIFs. Other graphics programs can merge multiple image files into a single GIF. Some video utilities can even convert short videos to animated GIFs. While this can be useful for sharing small videos on the web, the GIF format is not as efficient as the MPEG format for storing videos longer than a few seconds. Updated October 17, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/animated_gif Copy

ANR

ANR Stands for "Application Not Responding." ANR is an abbreviation that describes an unresponsive Android app. When an app is running on an Android device and stops responding, an "ANR" event is triggered. Two conditions may cause an ANR error on an Android device: An active app does not respond to an input event within 5 seconds. The BroadcastReceiver class does not finish executing after a long period of time. If an ANR error happens on your Android device, a dialog box will appear on the screen. The message will inform you the application is not responding and will ask if you want to close the app. You have two options: Wait or OK. Choosing "Wait" will allow you to keep waiting if you want to give the app more time. Choosing "OK" will close the app and you may lose unsaved activity. ANRs are different than crashes. A crash causes a program to quit unexpectedly. An ANR causes a program to "hang" in an unresponsive state for a few seconds, but it may recover. ANR errors happen for many different reasons. Some are developer-related, such as a poorly written function that loops more times than necessary. Others are device-related, meaning the hardware cannot keep up with the demands of the app. For example, if an app is rendering a large document, it may take several seconds to load the data and render the image on the screen. This could produce an ANR message, though the process might complete a few seconds later. Developers Because ANRs create a poor user experience, developers aim to avoid them or at least reduce the number of occurrences as much as possible. The Android operating system records ANRs and the corresponding activities to help developers debug their apps. If an app is distributed via Google Play, the ANR data is automatically sent to Google. Developers can review the ANR data in the Android Vitals section of the Developer Console. NOTE: No personal data is transmitted with ANR data. Only the app version, Android version, device type, and activity data (such as the current process) are recorded. Updated September 11, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/anr Copy

ANSI

ANSI Stands for "American National Standards Institute." ANSI is a U.S.-based non-profit organization that works to develop and promote standards in the United States and around the world. By standardizing new products and technologies, ANSI both strengthens the United States' position in the global marketplace and helps ensure product integrity and safety. ANSI was originally called the "American Engineering Standards Committee" (AESC), which was formed in 1918. The AESC worked with the American Institute of Electrical Engineers (now the IEEE) and several other organizations to develop engineering standards. In 1928, AESC was reorganized and renamed the "American Standards Association" (ASA). The ASA began to develop partnerships with global organizations, such as the ISO and helped promote U.S. standards internationally. In 1969, the ASA was renamed to ANSI. For the past several decades, ANSI has continued to promote both national and international standards. By standardizing new technologies, ANSI helps both corporations and government agencies create compatible products and services. For example, when ANSI standardizes a specific type of hardware port, computer manufacturers can build machines with the standardized port and know it will be compatible with third-party devices. When ANSI standardizes a file format, software developers can support the format in their programs, since information about the format is publicly available. When a new standard is accredited by ANSI, it means that the standard has met the organization's requirements for openness, balance, and consensus. In other words, only standards that pass a due process of rigorous approval guidelines become accredited standards. This ensures that all standards accredited by ANSI are worthwhile for manufacturers and consumers alike. Additional information about ANSI can be found at the official ANSI website. Updated January 3, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ansi Copy

Anti-Aliasing

Anti-Aliasing Image anti-aliasing is the smoothing of edges and colors in digital images and fonts. It makes edges appear less jagged and helps blend colors in a natural-looking way. To understand anti-aliasing, it is important to first understand what aliasing is. Image aliasing is the "jagged edge" effect caused by mapping curved and diagonal shapes onto square pixels. Horizontal and vertical lines can be mapped perfectly onto square pixels, but angled and curved lines must be estimated. If pixels along the edge are either on or off, the result is a jagged edge, also called "stair-stepping" or aliasing. Anti-aliasing smoothes edges by estimating the colors along each edge. Instead of pixels being on or off, they are somewhere in between. For example, a black diagonal line against a white background might be shades of light and dark grey instead of black and white. A blue circle on a yellow background might have shades of blue, yellow, and green along the edges instead of solid blue and yellow. The goal of an anti-aliasing algorithm is to make a digital image look natural when viewed from a typical viewing distance. When zoomed in, anti-aliased images and text may appear fuzzy because of the estimated pixels. Aliasing is most noticeable on screens with a low DPI (dots per inch). Modern HiDPI screens have a denser grid of pixels that can represent edges more accurately. However, even HiDPI screens still benefit from anti-aliasing. NOTE: Anti-aliasing in digital audio is the removal of unwanted frequencies from sampled audio. Updated September 6, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/anti-aliasing Copy
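At its core, anti-aliasing replaces an all-or-nothing pixel decision with a blend based on how much of each pixel the shape covers. Below is a simplified sketch of that blend for a single color channel; real renderers estimate the coverage value in much more sophisticated ways:

#include <iostream>

// Blend a foreground color channel over a background channel based on how much
// of the pixel the shape covers (0.0 = not covered, 1.0 = fully covered)
int blendChannel(int background, int foreground, double coverage) {
    return static_cast<int>(background * (1.0 - coverage) + foreground * coverage);
}

int main() {
    // A black line (value 0) drawn over a white background (value 255)
    std::cout << blendChannel(255, 0, 0.0)  << "\n";   // untouched pixel: 255 (white)
    std::cout << blendChannel(255, 0, 0.25) << "\n";   // edge pixel, 25% covered: 191 (light gray)
    std::cout << blendChannel(255, 0, 0.75) << "\n";   // edge pixel, 75% covered: 63 (dark gray)
    std::cout << blendChannel(255, 0, 1.0)  << "\n";   // fully covered pixel: 0 (black)
}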

Antivirus

Antivirus Antivirus software is a type of utility used for scanning and removing viruses from your computer. While many types of antivirus (or "anti-virus") programs exist, their primary purpose is to protect computers from viruses and remove any viruses that are found. Most antivirus programs include both automatic and manual scanning capabilities. The automatic scan may check files that are downloaded from the Internet, discs that are inserted into the computer, and files that are created by software installers. The automatic scan may also scan the entire hard drive on a regular basis. The manual scan option allows you to scan individual files or your entire system whenever you feel it is necessary. Since new viruses are constantly being created by computer hackers, antivirus programs must keep an updated database of virus types. This database includes a list of "virus definitions" that the antivirus software references when scanning files. Since new viruses are frequently distributed, it is important to keep your software's virus database up-to-date. Fortunately, most antivirus programs automatically update the virus database on a regular basis. While antivirus software is primarily designed to protect computers against viruses, many antivirus programs now protect against other types of malware, such as spyware, adware, and rootkits as well. Antivirus software may also be bundled with firewall features, which helps prevent unauthorized access to your computer. Utilities that include both antivirus and firewall capabilities are typically branded "Internet Security" software or something similar. While antivirus programs are available for Windows, Macintosh, and Unix platforms, most antivirus software is sold for Windows systems. This is because most viruses are targeted towards Windows computers and therefore virus protection is especially important for Windows users. If you are a Windows user, it is smart to have at least one antivirus program installed on your computer. Examples of common antivirus programs include Norton Antivirus, Kaspersky Anti-Virus, and ZoneAlarm Antivirus. Updated November 23, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/antivirus Copy

Apache

Apache Apache is the most popular Web server software. It enables a computer to host one or more websites that can be accessed over the Internet using a Web browser. The first version of Apache was released in 1995 by the Apache Group. In 1999, the Apache Group became the Apache Software Foundation, a non-profit organization that currently maintains the development of the Apache Web server software. Apache's popularity in the Web hosting market is largely because it is open source and free to use. Therefore, Web hosting companies can offer Apache-based Web hosting solutions at minimal costs. Other server software, such as Windows Server, requires a commercial license. Apache also supports multiple platforms, including Linux, Windows, and Macintosh operating systems. Since many Linux distributions are also open-source, the Linux/Apache combination has become the most popular Web hosting configuration. Apache can host static websites, as well as dynamic websites that use server-side scripting languages, such as PHP, Python, or Perl. Support for these and other languages is implemented through modules, or installation packages that are added to the standard Apache installation. Apache also supports other modules, which offer advanced security options, file management tools, and other features. Most Apache installations include a URL rewriting module called "mod_rewrite," which has become a common way for webmasters to create custom URLs. While the Apache Web server software is commonly referred to as just "Apache," it is technically called "Apache HTTP Server," since the software serves webpages over the HTTP protocol. When Apache is running, its process name is "httpd," which is short for "HTTP daemon." Updated January 7, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apache Copy

APFS

APFS Stands for "Apple File System." APFS is a file system developed by Apple specifically for flash memory storage devices. It was released with iOS 10.3 in March, 2017 and for macOS 10.13 (High Sierra) in September, 2017. The Apple File System is the successor to Apple's previous file system HFS+, which has been used for decades in Apple products. APFS is designed to provide a more efficient means of storing and accessing files on flash storage devices, such as smartphones and SSDs. It supports over 9 quintillion files on a single storage device, compared to the roughly 4.3 billion files supported by HFS+. APFS natively supports full disk encryption, while HFS+ does not. APFS provides a number of significant performance improvements over HFS+. It requires less memory overhead and has lower latency than HFS+. This means reading and writing files is faster, speeding up common operations such as opening documents and browsing large numbers of files. Copying is much more efficient since file clones are created by simply adding a pointer to the original file rather than duplicating the file. This creates copies instantly without requiring additional disk space. When a copied file is modified, the updates are recorded as "deltas" or changes to the file. Like its predecessor, the Apple File System supports TRIM to improve the lifespan of SSDs. It also adds "Space Sharing," which allows multiple APFS volumes to share the same free space on a physical storage device, or "container." This allows APFS-formatted volumes to grow (or shrink) as needed without repartitioning. Instead of journaling (implemented by HFS+), APFS tracks changes using a copy-on-write metadata scheme to record changes to the file system. This prevents file corruption caused by unexpected crashes and reduces the overhead of journaling. NOTE: APFS is automatically used in Macs with flash memory drives running macOS 10.13 High Sierra or later. Updating a flash-based Mac to High Sierra will automatically update the file system from HFS+ to APFS. Macs with HDDs or Fusion Drives will not be updated to APFS. Updated October 2, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apfs Copy

API

API Stands for "Application Programming Interface." An API is a set of commands, functions, protocols, and objects that programmers can use to create software or interact with an external system. It provides developers with standard commands for performing common operations so they do not have to write the code from scratch. APIs are available for both desktop and mobile operating systems. The Windows API, for example, provides developers with user interface controls and elements, such as windows, scroll bars, and dialog boxes. It also provides commands for accessing the file system and performing file operations, such as creating and deleting files. Additionally, the Windows API includes networking commands that can be used to send and receive data over a local network or the Internet. Mobile APIs, such as the iOS API, provide commands for detecting touchscreen input, such as tapping, swiping, and rotating. The iOS API also provides common user interface elements, such as a pop-up keyboard, a search bar, and a tab bar, which provides navigation buttons at the bottom of the screen. The iOS API also includes predefined functions for interacting with an iOS device's hardware, such as the camera, microphone, or speakers. Operating system APIs are typically integrated into the software development kit for the corresponding platform. For example, Apple's Xcode IDE allows developers to drag and drop elements into an application's interface. It also provides a list of available functions and includes syntax highlighting for known elements and commands. While operating system APIs have a robust set of features, other types of APIs are much more basic. For example, a website may provide an API for web developers that allows them to access specific information from the site. A website API may be as simple as a set of XML elements with a few basic commands for retrieving the information. Updated June 20, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/api Copy
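As a concrete illustration of the file operations mentioned above, a short C++ program can ask the operating system to create and then delete a file through the Windows API (the file name is just a placeholder for this example):

#include <windows.h>
#include <iostream>

int main() {
    // Ask the operating system to create a new file via the Windows API
    HANDLE file = CreateFileA("example.txt", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        std::cout << "Could not create the file.\n";
        return 1;
    }
    CloseHandle(file);

    // Ask the operating system to delete it again
    DeleteFileA("example.txt");
    return 0;
}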

APM

APM Stands for "Actions Per Minute." APM is a video game metric that measures how many actions a player performs per minute. For example, if a player averages one action each second, his or her APM is 60. If a player averages 2.5 actions per second, he or she has an APM of 150. APM applies to several different games, but it is most beneficial in real-time strategy (RTS) games. For example, games like StarCraft and Age of Empires require players to multitask in real-time. A player with a high APM will be able to accomplish more in a given time than a player with a lower APM. This provides a significant advantage when competing against players online or in eSports competitions. What is an "Action?" An action is a single mouse click or keystroke entered during gameplay. For example, clicking the cursor on a unit, pressing "A" to attack, then clicking the cursor on the attack location is three actions. Mouse movements are generally not considered actions. A high or "fast" APM in StarCraft 2 is 400 to 500 actions per minute. During peak moments of gameplay, professional players may exceed 800 APM. While APM is an important metric in RTS gaming, it is not the only ability that matters. Other important skills include: Strategy - understanding the game and creating effective methods to win Decision-making - knowing when to defend, when to attack, and how to adapt in the middle of a game Micro - controlling individual units, especially during combat Macro - continuously making units and structures, and using resources efficiently APM is most important in RTS games, but it can apply to other video games as well. For example, it helps to have a high APM in first-person shooter (FPS) games like Call of Duty and Overwatch. Being "fast" is also advantageous in fighting games like Street Fighter and Super Smash Bros. However, accuracy is often more critical than speed in these games, so APM is not as important. NOTE: APM may be inflated by redundant actions, such as repeatedly pressing a key or mouse button. An example is right-clicking the same location several times in a row when only one click is needed. Some games include an algorithm that calculates "effective APM," or EPM, by removing redundant actions. Updated June 6, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apm Copy
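The metric itself is simple arithmetic: divide the number of recorded actions by the elapsed time in minutes. For example:

#include <iostream>

int main() {
    double actions = 3200;            // clicks and keystrokes recorded during a match
    double seconds = 12 * 60 + 30;    // a 12.5-minute game

    double apm = actions / (seconds / 60.0);
    std::cout << "APM: " << apm << std::endl;   // 3200 / 12.5 = 256
}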

App

App App is short for "application," which is the same thing as a software program. While an app may refer to a program for any hardware platform, it is most often used to describe programs for mobile devices, such as smartphones and tablets. The term "app" was popularized by Apple when the company created the "App Store" in 2008, a year after the first iPhone was released. As the iPhone and App Store grew in popularity, the term "app" became the standard way to refer to mobile applications. Programs for Android and Windows Phone are now called "apps" as well. Unlike applications for traditional PCs (often called "desktop applications"), mobile apps can only be obtained by downloading them from an online app store. Most devices automatically install apps when downloaded, which creates a seamless installation process for the user. Some apps are free, while others must be purchased. However, mobile apps are typically much cheaper than PC applications, and many are available for only 99 cents. In fact, most paid apps are less than $10. Part of the reason mobile apps are cheaper than desktop applications is because they are often less advanced and require fewer resources to develop. Apps are limited to the capabilities of the mobile operating system (such as iOS or Android) and therefore may not offer as much functionality as a desktop program. For example, a word processor for Android will most likely have significantly fewer features than a word processing application for Windows. Most apps are designed to be small, fast, and easy-to-use. Unlike desktop applications, apps are intended to be used on-the-go and are developed to take advantage of a small touchscreen interface. NOTE: Apple released the Mac App Store in January, 2011, which offers downloadable applications for Mac OS X. In this case, the term "app" refers to desktop applications. Updated September 22, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/app Copy

Apple

Apple Apple is a technology company based in Cupertino, California. It was founded in 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne. The company makes Macintosh computers, such as the iMac, Mac mini, MacBook, MacBook Pro, and Mac Pro. Apple also manufactures several personal electronic devices, including the iPhone, iPad, iPod, and Apple Watch. While Apple is most known for its hardware devices, it also develops a wide range of software programs. Examples include the macOS operating system, the Safari web browser, and the iWork apps (Pages, Numbers, and Keynote). Apple also develops a few professional media applications, including Final Cut Pro and Logic Pro. In addition to hardware and software, Apple offers several subscription services. Examples include iCloud, Apple Music, Apple TV+, Apple News+, and Apple Fitness+. Apple also has over 500 retail stores worldwide, which offer Apple products and provide Apple product support. NOTE: In 2007, Apple changed the company name from "Apple Computer" to just "Apple," which reflected the broadening product lines. Apple launched the first iPhone on June 29, 2007. Updated April 19, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apple Copy

Apple Silicon

Apple Silicon Apple silicon is a series of Mac processors developed by Apple. For several decades, Apple built Macintosh computers with CPUs from third-party manufacturers, including Motorola, IBM, and Intel. On June 22, 2020, Apple announced it would be transitioning its Mac lineup to "Apple silicon," a proprietary processor technology. Similar to Apple-developed processors found in iPhones and iPads, Apple silicon processors use a System on a Chip (SoC) architecture. The SoC design includes a CPU, GPU, and several application-specific processors. Examples of additional processors include: A media engine for media playback A neural engine for machine learning applications Efficiency cores for low-power computing Most processors within an Apple silicon SoC are multi-core, meaning they have multiple processing cores. The M1, Apple's first "Apple silicon" chip, was released in November 2020 with the MacBook Air and Mac mini. It has 4 performance cores (primary CPU processors), 4 efficiency cores, 8 GPU cores, and a 16-core neural engine. The M1 Pro, released in October 2021 with the MacBook Pro, has 8 performance cores, 2 efficiency cores, 16 GPU cores, and a 16-core neural engine. The M1 Max, released alongside the M1 Pro, has 8 performance cores (primary CPU processors), 2 efficiency cores, 32 GPU cores, and a 16-core neural engine. Future Apple silicon chips are expected to have even more processing cores. Updated February 5, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apple_silicon Copy

Applet

Applet An applet is a small application designed to run within another application. While the term "applet" is sometimes used to describe small programs included with a computer's operating system, it usually refers to Java applets, or small applications written in the Java programming language. Unlike ordinary applications, Java applets cannot be run directly by the operating system. Instead, they must run within the Java Runtime Environment (JRE), or within another program that includes a Java plug-in. If there is no JRE installed, Java applets will not run. Fortunately, Java is freely available for Windows, Mac, and Linux systems, which means you can easily download and install the appropriate JRE for your system. Since Java applets run within the JRE and are not executed by the operating system, they are crossplatform, meaning a single applet can run on Windows, Mac, and Linux systems. While applets can serve as basic desktop applications, they have limited access to system resources and therefore are not ideal for complex programs. However, their small size and crossplatform nature make them suitable for Web-based applications. Examples of applets designed to run in web browsers include calculators, drawing programs, animations, and video games. Web-based applets can run in any browser on any operating system as long as the Java plug-in is installed. During the early years of the Web, Java applets provided a way for webmasters to add interactive features that were not possible with basic HTML. However, in recent years, applets have been slowly replaced by newer technologies such as jQuery and HTML 5. Some browsers, like Google Chrome, no longer support the applet tag, and others, like Apple Safari, do not even enable Java by default. Since web developers cannot fully rely on Java support from web browsers, applets are no longer a common way to provide interactive content on the Web. Updated January 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/applet Copy

Application

Application An application is a type of software program meant to help a computer user accomplish a task. Some applications come bundled with a computer's operating system, while others are available for download from websites and through App Stores. The full word "application" is typically used for software on desktop and laptop computers; smartphones, tablets, and other non-computer platforms often use the shortened form "app." Applications differ from other software categories like system software, utilities, and malware. Applications are specifically user-facing programs that run in the foreground and help you accomplish what you want to do. For example, if you wanted to create a presentation, you could do so in a presentation-making application like PowerPoint or Keynote. On the other hand, system software and utilities typically work in the background without user input, performing automated system tasks like establishing network connections or responding to incoming SSH requests. Likewise, a virus that silently searches your disks for sensitive data to send to a remote server is not an application but instead an example of a malware program. A selection of applications viewed in the macOS Launchpad. Many categories of applications are available to help you with nearly any task. A few of the most common types include: Word processors that let users write documents, including formatting and page layout features. Image editors that can open and edit bitmap and vector images. Web browsers for browsing the internet and viewing webpages. Communication applications for email, text chat, audio/video conferencing, and screen sharing. Media players that can play audio and video files. Video games in countless genres, including puzzle games, action shooters, simulations, and roleplaying adventures. Some applications may come bundled as part of an application suite. Applications in a suite can work together as part of a larger workflow. For example, the Microsoft Office application suite includes separate applications for word processing, spreadsheets, presentations, email, and several other tasks. Adobe Creative Suite, meanwhile, bundles a set of applications for editing raster graphics, vector illustrations, animations, audio, and video. Updated June 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/application Copy

Application Server

Application Server An application server is a server specifically designed to run applications. The "server" includes both the hardware and software that provide an environment for programs to run. Application servers are used for many purposes. Several examples are listed below: running web applications hosting a hypervisor that manages virtual machines distributing and monitoring software updates processing data sent from another server Since the purpose of an application server is to run software programs, the most important hardware specifications are CPU and RAM. On the software side, the operating system is most important, since it determines what software the server can run. Why Use an Application Server? A web server is designed – and often optimized – to serve webpages. Therefore, it may not have the resources to run demanding web applications. An application server provides the processing power and memory to run these applications in real-time. It also provides the environment to run specific applications. For example, a cloud service may need to process data on a Windows machine. A Linux-based server may provide the web interface for the cloud service, but it cannot run Windows applications. Therefore, it may send input data to a Windows-based application server. The application server can process the data, then return the result to the web server, which can output the result in a web browser. Updated November 22, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/application_server Copy

APU

APU An APU is a processor that includes both the CPU and GPU on a single chip. The name “APU” was coined by AMD, which released the first APU in January, 2011. For many years, CPUs handled all non-graphics calculations, while GPUs were only used for graphics operations. As GPU performance increased, hardware manufacturers and software programmers realized GPUs had a lot of unused potential. Therefore, they began to find ways to offload certain system calculations to the GPU. This strategy, called “parallel processing,” enables the GPU to perform calculations alongside the CPU, improving overall performance. The APU takes parallel computing one step further by removing the bus between the CPU and GPU and integrating both units on the same chip. Since the bus is the main bottleneck in parallel processing, an APU is more efficient than a separate CPU and GPU. While this strategy may not make sense for desktop computers with dedicated video cards, it can provide significant performance gains for laptops and other mobile devices that have integrated graphics chips. NOTE: While Intel processors are not called APUs, modern Intel architectures, such as Sandy Bridge and Ivy Bridge are designed with integrated CPUs and GPUs. These chips are sometimes called “hybrid processors,” since they contain both the central processing unit and the graphics processing unit. Updated November 14, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/apu Copy

Archie

Archie Archie is a program that allows you to search for files available on one or more FTP servers. It was commonly used in the early 1990s, but has been replaced by standard web-based search engines and peer-to-peer (P2P) file sharing services. In the early days of the Internet, large files were often available only through FTP servers. In order to download a specific file, users would have to navigate to the appropriate directory and then find the correct file before downloading it. This made it difficult for people to locate files unless they knew exactly where they were stored on the server. Archie made it possible for users to actually search FTP servers rather than browsing through all the directories. While Archie is rarely used today, some websites still offer an Archie search feature. You can often identify an Archie search engine by a URL that begins with "archie" rather than "www." Most Archie search engines allow you to search for filenames based on either substrings or exact matches. You can also specify if a search should be case sensitive or not. Additionally, you can use boolean operators such as AND and OR to search for multiple filenames at once. Updated February 8, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/archie Copy

Architecture

Architecture The word "architecture" typically refers to building design and construction. In the computing world, "architecture" also refers to design, but instead of buildings, it describes the design of computer systems. Computer architecture is a broad topic that includes everything from the relationship between multiple computers (such as a "client-server" model) to specific components inside a computer. The most important type of hardware design is a computer's processor architecture. The design of the processor determines what software can run on the computer and what other hardware components are supported. For example, Intel's x86 processor architecture is the standard architecture used by most PCs. By using this design, computer manufacturers can create machines that include different hardware components, but run the same software. Several years ago, Apple switched from the PowerPC architecture to the x86 architecture to make the Macintosh platform more compatible with Windows PCs. The architecture of the motherboard is also important in determining what hardware and software a computer system will support. The motherboard design is often called the "chipset" and defines what processor models and other components will work with the motherboard. For example, while two motherboards may both support x86 processors, one may only work with newer processor models. A newer chipset may also require faster RAM and a different type of video card than an older model. NOTE: Most modern computers have 64-bit processors and chipsets, while earlier computers used a 32-bit architecture. A computer with a 64-bit chipset supports far more memory than one with a 32-bit chipset and can run software designed specifically for 64-bit processors. Updated April 4, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/architecture Copy

Archive

Archive An archive file is a single file that contains multiple files and/or folders. Archives may use a compression algorithm that reduces the total file size. Some archive formats also include encryption to protect an archive's contents from unauthorized users. Archives are useful for backing up multiple files and folders into a single file. They preserve the metadata for files they contain, like a file's created and modified dates. Consolidating a large number of files into one makes them easier to back up and share. For example, you can compress an entire vacation's worth of photos into a single archive, then send it to family members or a backup service. Since everything is in a single file, you don't have to worry about missing any photos. If the archive format supports compression, it will also reduce the total file size of the archive compared to the individual files. Multiple files compressed into a single archive file. Most operating systems include built-in utilities that can create and extract archive files. Windows Explorer and the macOS Finder can create and expand compressed ZIP files. Unix and Linux can create and expand TAR files with or without gzip compression. Third-party archiving utilities like WinRAR and 7-Zip can create other types of archive files using a variety of compression algorithms. Some archive formats, like RAR, even support splitting an archive into several smaller chunks that can be reassembled later. NOTE: Common archive file extensions include .ZIP, .TAR, .7Z, .RAR, .TAR.GZ, and .TGZ. Updated April 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/archive Copy

Archive Bit

Archive Bit An archive bit is a single bit within a file that indicates whether the file has been backed up or not. A value of 0 indicates the current version of the file has been backed up, while a 1 indicates it has not. Various backup utilities use the archive bit to determine which files to back up and which files to skip. The archive bit is a file attribute supported by several operating systems, including Windows and OS/2. It is typically not visible to the user but is part of the file's metadata. Setting the archive bit modifies the file itself, so the feature cannot be used on operating systems such as macOS and Unix. These operating systems use file timestamps to track the last modified dates of files and compare them with the last backup time to determine what files need to be backed up. While the archive bit provides an easy way to label files for backup, it is not always accurate. For example, if more than one backup program is running on a computer, the archive bit may be turned off by one program when it has not been backed up by another one. For this reason, many modern programs use their own backup system, such as a journal of backed up files. Advanced backup systems may also support incremental backups, which back up multiple versions of a file over time. NOTE: Since archive bits are used exclusively for backup purposes, they are sometimes called "backup bits." Updated October 29, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/archive_bit Copy
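On Windows, the archive bit is one of the attribute flags a program can read and change through the Windows API. Below is a minimal sketch of how a backup utility might check the bit and clear it after backing a file up; the file path is only a placeholder:

#include <windows.h>
#include <iostream>

int main() {
    const char* path = "C:\\example\\report.docx";   // placeholder path

    DWORD attributes = GetFileAttributesA(path);
    if (attributes == INVALID_FILE_ATTRIBUTES) {
        std::cout << "File not found.\n";
        return 1;
    }

    if (attributes & FILE_ATTRIBUTE_ARCHIVE) {
        std::cout << "Archive bit is 1: the file has changed since the last backup.\n";
        // ... back up the file here, then clear the archive bit ...
        SetFileAttributesA(path, attributes & ~FILE_ATTRIBUTE_ARCHIVE);
    } else {
        std::cout << "Archive bit is 0: the file is already backed up.\n";
    }
    return 0;
}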

ARM

ARM ARM is a family of 32- and 64-bit RISC processor architectures designed by Arm Ltd. and licensed to other companies to customize and produce. They are less expensive to manufacture than x86 processors, require less power, and produce less heat, making them ideal for compact devices. While they have historically been less powerful than x86 processors, some high-end ARM designs are now competitive against other desktop computer processors. The ARM processor design is modular, which allows licensees to customize the processor designs to meet their needs. These customizations often include integrating other components like a GPU or cellular modem. This flexibility makes ARM a common system-on-a-chip (SoC) platform. After the licensee certifies the design with Arm (to ensure it still meets compatibility standards), they can manufacture their design themselves or with a manufacturing partner. A Raspberry Pi computer with a Broadcom-made ARM CPU Many companies license ARM architecture for countless specialized purposes, from small embedded devices and smartphones to powerful desktop processors. For example, the Raspberry Pi uses a basic ARM processor design in a low-power computer on a single circuit board the size of a playing card. Meanwhile, Apple licenses only the ARM instruction set to produce their own designs for their A-series smartphone SoCs and M-series desktop processors. Updated March 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/arm Copy

ARP

ARP Stands for "Address Resolution Protocol." Every device on a network has an IP address assigned to it that identifies it as a source and destination for data packets. It also has a MAC address that identifies its physical connection to the network over which it can send and receive data. ARP is a protocol that maps a device's IP address to its MAC address so that other devices know to send it the right data packets. When a device wants to send a data packet over a network, it needs to know which network connection leads to the destination IP address. First, it broadcasts an ARP request packet to every other device on the network asking which device is assigned that address. This packet is ignored by every device except the one with that IP address, which responds with its MAC address. The sending device then knows which physical network interface it should direct the data packet to. A computer broadcasts an ARP request and only the computer with the specified IP address replies After associating an IP address with a MAC address, the sending device saves that information to its ARP cache. The next time it transmits a data packet, it can consult that cache first to see if it already knows which MAC address is associated with that IP address. Since network conditions often change, with devices leaving the network and joining again later with a new IP address, an ARP cache keeps entries for a limited time — typically 10 to 20 minutes. Updated November 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/arp Copy
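A conceptual sketch of an ARP cache (not an actual operating system implementation) might look like the following; the addresses and timeout are chosen only for illustration.

import time

CACHE_TTL = 15 * 60          # entries expire after roughly 15 minutes
arp_cache = {}               # maps an IP address to (MAC address, time stored)

def store(ip_address, mac_address):
    arp_cache[ip_address] = (mac_address, time.time())

def lookup(ip_address):
    # Return a cached MAC address, or None if the entry is missing or expired
    entry = arp_cache.get(ip_address)
    if entry and time.time() - entry[1] < CACHE_TTL:
        return entry[0]
    return None              # the device would broadcast a new ARP request

store("192.168.1.20", "3c:22:fb:01:02:03")
print(lookup("192.168.1.20"))    # 3c:22:fb:01:02:03
print(lookup("192.168.1.99"))    # None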

Array

Array An array is a data structure that contains a group of elements of the same data type and stores them together in contiguous memory locations. Computer programmers use arrays in their programs to organize sets of data in a way that can be easily sorted and searched. They are more efficient at storing data than separate variables and can help a program run faster. Arrays are a very versatile method of storing data in a program. For example, a search engine may use an array to save a list of search results. It can display one element from that array at a time, in sequence, until it reaches either a specified number of results or the final value stored in the array. Since these values are all stored in one large block of memory instead of in separate variables stored in multiple locations, the results are shown quickly and efficiently. An array containing six values and their corresponding index addresses Since an array stores its values in contiguous memory locations, the size of an array is set when created. The data type, like integer or string, is also defined at creation. A program can reference individual values in an array using the array name combined with that value's index address. In most languages, the index starts with 0 and increments from there. However, some languages start the index at 1, while others allow the programmer to choose whether an index starts with 0 or 1. Creating and Using an Array The syntax in C++ for creating an array and storing values in it looks like this: int characterStats[6] = {15, 14, 13, 12, 10, 8}; The int keyword sets the data type as integer, and characterStats gives the array a name. The square brackets [] specify that it is an array, while the number inside the brackets determines its length. The values in the curly brackets {}, separated by commas, are the values contained in the array. The C++ syntax for displaying a specific value in an array looks like this: std::cout << characterStats[2]; Since this array's index starts with 0, this statement would output the number 13, the array's third value. Programmers may also use while and for loops with arrays to output multiple values from an array in sequence with a single command. Updated May 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/array Copy

Artifact

Artifact An artifact, or compression artifact, is a small distortion in a digital image, video, or audio file caused by a lossy compression algorithm. Artifacts occur when compression algorithms discard some of the original media data to create a lower-detail approximation of the original file. The inclusion of some artifacts into a media file is the cost of significantly reducing a file's size to make it use less bandwidth when downloading or streaming. Lossy compression algorithms can achieve much greater reductions in file size than lossless algorithms, but they do so by removing data. For example, JPEG compression reduces an image's file size significantly but may introduce several types of artifacts — subtle blockiness, blurred details, and the dulling of bright colors. MPEG video files include similar artifacts, especially in dark areas with lots of movement. A highly-compressed JPEG (right), showing the loss of fine details and blocky artifacts in regions of high contrast You can tune a lossy compression algorithm to balance file size and quality. If you need to compress media further, it is best to restart from the original file and apply a different compression level. Applying a lossy compression algorithm to the same file multiple times will compound artifacts, making them more extreme with each generation. Once you apply lossy compression, the artifacts introduced are permanent; you cannot un-compress a file to restore the details that were removed. It's best to make a compressed copy of the original file, instead of overwriting it, in case you need access to the more detailed version. Updated December 17, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/artifact Copy
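The way repeated lossy compression compounds artifacts can be demonstrated with a short script; this sketch assumes the third-party Pillow imaging library is installed and uses placeholder file names.

from PIL import Image   # Pillow imaging library (third-party)

# Re-save the same JPEG several times at low quality; each generation
# discards more data, so compression artifacts compound.
image = Image.open("original.jpg")
for generation in range(1, 4):
    path = f"generation_{generation}.jpg"
    image.save(path, quality=20)
    image = Image.open(path)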

Artificial Intelligence

Artificial Intelligence Artificial Intelligence, or AI, is the ability of a computer running special software to act intelligently — perceiving new data, learning, drawing inferences, and solving problems. It allows computers to perform work that previously required human intervention. AI can handle all sorts of tasks, from simple ones like customer service chatbots and video game opponents to more complex ones like recommendation engines, stock market trading bots, and image generators. Artificial intelligence uses algorithms to process data input that a computer receives. These algorithms analyze data to find patterns, then make predictions or classifications by comparing new inputs to those patterns. Many AIs train using machine learning, which provides them with large, curated data sets to help guide them toward the results the developers of the AI want. Increasingly complex AI algorithms require increasingly-powerful computer hardware. In cases where a computer's CPU isn't fast enough for AI tasks, a high-end GPU can help. Originally designed to process computer graphics, GPUs can handle large amounts of data and quickly analyze it in parallel processes, making them very efficient at running AI and machine learning algorithms. Higher-end AI uses specialized FPGA processors designed for the needs of the AI running on it. Uses of AI The ultimate goal of AI development is to use computers to perform tasks that previously required human effort and to find new or more efficient ways to accomplish those tasks. Early AI started small, with routine work, and has slowly increased its capabilities as algorithms improve and computer hardware becomes more powerful. As AI technology improves, it opens up more kinds of tasks to automation. Chatbots attempt to mimic a real person over online text chat. Early chatbots were novelties, but modern chatbots are commonly used in customer service roles, helping people resolve simple problems and saving time for human customer service agents to focus on more difficult issues. Video games use AI to create virtual opponents for players. Most genres use AI to some degree, including action and strategy games, as well as board games like chess and Go. AI-powered speech recognition enables new user interfaces, allowing people to speak to some devices to ask questions and issue commands. Recommendation engines are used by online stores (like Amazon) and streaming media services (like Netflix) to look for trends in a person's purchases or viewing history. Based on those trends, the AI suggests products or media that person may want to purchase or watch next. AI programs can help businesses, government organizations, and financial institutions identify data trends to streamline processes like identifying fraud, improving logistics, and trading stocks. Art generators like DALL-E and Stable Diffusion are trained on large data sets of curated images and artwork and can generate new images in a wide range of styles based on text prompts. Updated January 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/artificial_intelligence Copy

ASCII

ASCII Stands for "American Standard Code for Information Interchange." ASCII is a character encoding that uses numeric codes to represent characters. These include upper and lowercase English letters, numbers, and punctuation symbols. Standard ASCII Standard ASCII can represent 128 characters. It uses 7 bits to represent each character since the first bit of the byte is always 0. For instance, a capital "T" is represented by 84, or 01010100 in binary. A lowercase "t" is represented by 116, or 01110100 in binary. Other keyboard keys are also mapped to standard ASCII values. For example, the Escape key (ESC) is 27 in ASCII and the Delete key (DEL) is 127. ASCII codes may also be displayed as hexadecimal values instead of the decimal numbers (0 to 127) listed above. For example, the ASCII value of the Escape key (27) in hexadecimal is 1B. The hexadecimal value of the Delete key (127) is 7F. Extended ASCII The 128 (2^7) characters supported by standard ASCII are enough to represent all standard English letters, numbers, and punctuation symbols. However, it is not sufficient to represent all special characters and characters from other languages. Extended ASCII helps solve this problem by adding an extra 128 values, for a total of 256 (2^8) characters. The additional binary values start with a 1 instead of a 0. For example, in extended ASCII, the character "é" is represented by 233, or 11101001 in binary. The capital letter "Ö" is represented by 214, or 11010110 in binary. While extended ASCII doubles the character set of standard ASCII, it does not include nearly enough characters to support all languages. Some Asian languages, for example, require thousands of characters. Therefore, other character encodings, such as Latin-1 (ISO-8859-1) and UTF-8, are now more commonly used than ASCII for documents and webpages. UTF-8 supports over one million characters. Updated April 27, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ascii Copy
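The values above can be checked with a few lines of Python, for example:

print(ord("T"), format(ord("T"), "08b"))   # 84 01010100
print(ord("t"), format(ord("t"), "08b"))   # 116 01110100
print(hex(27), hex(127))                   # 0x1b 0x7f  (Escape and Delete)
# ord() returns the Unicode code point, which for "é" matches its
# extended ASCII / Latin-1 value
print(ord("é"), format(ord("é"), "08b"))   # 233 11101001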

ASO

ASO Stands for "App Store Optimization." ASO is the process of optimizing the listing of an app in an online app store, such as Apple's App Store, the Microsoft Store, or Google Play. It is similar to search engine optimization (SEO) but is targeted towards apps rather than websites. The goal of ASO is to improve an app's visibility within an app store. One of the top ways an app can gain prominence is to rank highly in the search results for specific keywords. For example, the developer of a racing game might aim to have the app appear in the top results for searches like "racing," "cars," and "racing game." The developer of a photo editing app might want the app to rank highly for searches such as "photos," "photo editor," and "image adjustment." While ASO might seem straightforward, there are often hundreds or even thousands of apps competing for the same top rankings. Additionally, each app store uses a different algorithm, similar to those used by web search engines. The algorithms are frequently updated, which makes ASO an ongoing process. Basic app store optimization strategies include: Choosing a descriptive app name or title Publishing an accurate and helpful app description in the app store Selecting appropriate keywords for the app store listing Providing useful and attractive screenshots Advanced ASO strategies that may require more time to implement include: Building a large user base, increasing the popularity of the app Generating a large number of positive ratings and reviews Having other websites or apps link to the app listing Ensuring the app is up-to-date and stable. Some app stores allow developers to advertise their apps within the store. Similar to paid search advertising, this provides developers with an alternative way to improve an app's visibility by paying for a top search result. Paid placement is not "organic ASO," but is commonly used to boost app downloads, especially when a new app is launched. NOTE: ASO may also aim to improve an app's ranking in the Browse section of an app store. For example, if an app is listed as featured or trending, it may significantly increase the app's visibility and number of downloads. Updated August 22, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/aso Copy

ASP

ASP ASP has two different meanings in the IT world: 1) Application Service Provider, and 2) Active Server Page. 1) Application Service Provider An Application Service Provider is a company or organization that provides software applications to customers over the Internet. These Internet-based applications are also known as "software as a service" (SaaS) and are often made available on a subscription basis. This means ASP clients often pay a monthly fee to use the software, rather than purchasing a traditional software license. Some SaaS applications can be accessed via a web browser, while others operate over a proprietary secure port. 2) Active Server Page An Active Server Page, commonly called an "ASP page," is a webpage that may contain scripts as well as standard HTML. The scripts are processed by an ASP interpreter on the web server each time the page is accessed by a visitor. Since the content of an ASP page can be generated on-the-fly, ASP pages are commonly used for creating dynamic websites. ASP is similar to other scripting platforms, like PHP and JSP, but supports multiple programming languages. While the default ASP language is VBScript, ASP pages can include other programming languages as well, such as C# and JavaScript. However, alternative languages must be declared in a directive at the top of the page, before any script code. ASP was later succeeded by ASP.NET, Microsoft's web application framework. Since both are Microsoft technologies, ASP pages are most often found on Windows-based web servers that run Microsoft Internet Information Services, or IIS. You can tell if you are accessing an ASP page in your browser if the URL has an ".asp" or ".aspx" suffix. File extensions: .ASP, .ASPX Updated March 15, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/asp Copy

ASP.NET

ASP.NET ASP.NET is a set of Web development tools offered by Microsoft. Programs like Visual Studio .NET and Visual Web Developer allow Web developers to create dynamic websites using a visual interface. Of course, programmers can write their own code and scripts and incorporate them into ASP.NET websites as well. Though it is often seen as a successor to Microsoft's ASP programming technology, ASP.NET also supports Visual Basic .NET, JScript .NET, and open-source languages like Python and Perl. ASP.NET is built on the .NET framework, which provides an application program interface (API) for software programmers. The .NET development tools can be used to create applications for both the Windows operating system and the Web. Programs like Visual Studio .NET provide a visual interface for developers to create their applications, which makes .NET a reasonable choice for designing Web-based interfaces as well. In order for an ASP.NET website to function correctly, it must be published to a Web server that supports ASP.NET applications. Microsoft's Internet Information Services (IIS) Web server is by far the most common platform for ASP.NET websites. While there are some open-source options available for Linux-based systems, these alternatives often provide less than full support for ASP.NET applications. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/aspnet Copy

Aspect Ratio

Aspect Ratio An aspect ratio describes the relationship of an object's width to its height. It is commonly used in computing to describe the proportions of a rectangular screen. Aspect ratios are written as mathematical expressions, using the following format: width:height For instance, a monitor that is 20 inches wide by 15 inches tall has an aspect ratio of 20:15. After reducing the fraction (dividing each number by the greatest common factor – in this case, 5), the aspect ratio is 4:3, or "four by three." 4:3 was the aspect ratio used by standard definition (SD) televisions, before HDTV. A square screen or image has an aspect ratio of 1:1. A screen that is twice as tall as it is wide has an aspect ratio of 1:2. If a screen is 50% wider than it is tall, its aspect ratio is 3:2. Both HDTVs and 4K televisions have aspect ratios of 16:9 ("sixteen by nine"), meaning they are almost twice as wide as they are tall. An HDTV, for example, has a resolution of 1920x1080 pixels. The 16:9 aspect ratio of HD resolution can be verified using the math below. 1920 ÷ 16 = 120. 120 x 9 = 1080. Alternatively, 1920 ÷ 120 = 16. 1080 ÷ 120 = 9. 4K is simply twice the width and height of HD, or 3840x2160. Therefore, 3840 ÷ 240 = 16. 2160 ÷ 240 = 9. While most modern televisions have 16:9 aspect ratios, other types of screens may be longer or taller. For example, many tablets and computer monitors have 16:10 (or 8:5) aspect ratios, which means they are slightly taller relative to a standard HD display. Smartphones often have extra-long screens when held sideways (a.k.a. landscape view). For example, the Samsung Galaxy S8 has an aspect ratio of 18.5:9 and the iPhone X has an aspect ratio of 19.5:9. When watching HD video on these devices, the video does not fit the full width of the screen. Instead, black bars are displayed on the sides because the aspect ratio of the video is not as wide as the aspect ratio of the screen. NOTE: HD (16:9) is about 33% wider than SD (4:3, or 12:9). That is why modern flatscreen displays appear wider than older CRT televisions. Updated December 2, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/aspect_ratio Copy
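Reducing a resolution to its aspect ratio is simply a matter of dividing both numbers by their greatest common factor, as in this short Python example:

from math import gcd

def aspect_ratio(width, height):
    divisor = gcd(width, height)           # greatest common factor
    return f"{width // divisor}:{height // divisor}"

print(aspect_ratio(1920, 1080))   # 16:9
print(aspect_ratio(3840, 2160))   # 16:9
print(aspect_ratio(1280, 800))    # 8:5  (the same shape as 16:10)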

Assembler

Assembler An assembler is a program that converts assembly language into machine code. It takes the basic commands and operations from assembly code and converts them into binary code that can be recognized by a specific type of processor. Assemblers are similar to compilers in that they produce executable code. However, assemblers are more simplistic since they only convert low-level code (assembly language) to machine code. Since each assembly language is designed for a specific processor, assembling a program is performed using a simple one-to-one mapping from assembly code to machine code. Compilers, on the other hand, must convert generic high-level source code into machine code for a specific processor. Most programs are written in high-level programming languages and are compiled directly to machine code using a compiler. However, in some cases, assembly code may be used to customize functions and ensure they perform in a specific way. Therefore, IDEs often include assemblers so they can build programs from both high and low-level languages. Updated September 5, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/assembler Copy

Assembly Language

Assembly Language An assembly language is a low-level programming language designed for a specific type of processor. It may be produced by compiling source code from a high-level programming language (such as C/C++) but can also be written from scratch. Assembly code can be converted to machine code using an assembler. Since most compilers convert source code directly to machine code, software developers often create programs without using assembly language. However, in some cases, assembly code can be used to fine-tune a program. For example, a programmer may write a specific process in assembly language to make sure it functions as efficiently as possible. While assembly languages differ between processor architectures, they often include similar instructions and operators. Below are some examples of instructions supported by x86 processors. MOV - move data from one location to another ADD - add two values SUB - subtract a value from another value PUSH - push data onto a stack POP - pop data from a stack JMP - jump to another location INT - interrupt a process The following assembly language can be used to add the numbers 3 and 4: mov eax, 3 - loads 3 into the register "eax" mov ebx, 4 - loads 4 into the register "ebx" add eax, ebx - adds "ebx" to "eax" and stores the result (7) in "eax" Writing assembly language is a tedious process since each operation must be performed at a very basic level. While it may not be necessary to use assembly code to create a computer program, learning assembly language is often part of a Computer Science curriculum since it provides useful insight into the way processors work. Updated September 5, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/assembly_language Copy

Assistive Technology

Assistive Technology Assistive technology refers to hardware and software designed to help people with disabilities. Some types of assistive technology provide physical assistance, while others provide helpful aids for individuals with learning disabilities. Examples of common assistive devices include hearing aids, wheelchairs, and prosthetics. Hearing aids amplify sound, helping individuals who have difficulty hearing. Modern hearing aids even filter out background noise and clarify speech, making conversation easier. Wheelchairs provide mobility for individuals who are unable to walk. Motorized wheelchairs provide a means of transportation for people with limited upper body function. Prosthetics can replace missing body limbs, such as arms or legs. Some modern prosthetics even allow people to control appendages, such as the fingers on a prosthetic hand. Software designed to help individuals with physical limitations is often called "Accessibility" software. Popular operating systems, such as Windows, OS X, and iOS include several accessibility features. Some examples include: Text to Speech - A computer can speak text for people with visual impairments. It also provides a way for mute individuals to communicate with others. Speech to Text - Also called dictation, this feature translates spoken words into text for people who have difficulty using a keyboard. Some operating systems allow users to speak common commands such as opening or quitting programs. Voiceover - Some operating systems can speak descriptions of items when the user selects them or moves the cursor over them. Screen Zoom - Keyboard shortcuts can be used to zoom into different areas of the screen, increasing the size of text and images. Display Enhancements - Inverting colors and increasing contrast can make it easier for individuals with limited vision to see the screen. Assistive software may also be designed for educational purposes. For example, a specialized reading program may help students with dyslexia. Math tutor programs can provide a way for students to learn mathematical concepts at a comfortable pace. Memory applications can help individuals with brain injuries restore their memorization capabilities. NOTE: While not designed as assistive technology, touchscreen devices such as tablets are commonly used as assistive devices since they provide a natural user interface. Updated October 21, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/assistive_technology Copy

ATA

ATA Stands for "Advanced Technology Attachment." ATA was a standard interface for connecting hard drives and floppy disk drives to a computer. It integrated the controller into the drive itself so that an ATA drive did not require a separate controller card to connect to a motherboard. Following the introduction of the SATA interface in the early 2000s, the ATA interface was retroactively renamed "PATA" for "Parallel ATA." The ATA interface used 40-pin ribbon cables to connect drives to the motherboard. These cables could include three connectors, allowing two drives to share a single cable. For the motherboard to send data to the correct drive, ATA drives had to be designated "Device 0" (the primary drive) or "Device 1" (the secondary drive) using jumpers located on the drive next to the ATA connector. Only one drive on a cable could communicate with the system at a time, and Device 0 always had higher priority. Later revisions of the ATA specification added a "cable select" mode that automatically designated roles based on which connector a drive was attached to. A Seagate ATA hard drive The ATA standard was updated several times in the 1990s. Each new revision added new DMA transfer modes to improve drive performance. ATA-4 added ATA Packet Interface (ATAPI) commands that added support for other kinds of storage drives — optical drives, tape drives, and high-capacity removable disk drives. The final revisions, ATA-7 and ATA-8, supported data transfer speeds up to 133 Megabytes/second. ATA was eventually made obsolete by the SATA interface, which offered faster transfer speeds, a smaller connector, and support for hot swapping. NOTE: The ATA interface was also known as the Integrated Drive Electronics (IDE) interface. Updated November 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ata Copy

ATM

ATM Stands for "Asynchronous Transfer Mode." In computer networking, ATM refers to a high-speed networking technology designed to transmit data efficiently. Asynchronous Transfer Mode transmits data in fixed-size packets called cells, each 53 bytes long (5 bytes for the header and 48 bytes for the payload). This small, uniform cell size allows for rapid processing and routing through ATM switches, enabling data transfer rates up to 622 Mbps. Originally developed to handle diverse types of traffic such as voice, video, and data, ATM supports the simultaneous transmission of different types of media. Its fixed cell size helps minimize latency and jitter, making it well-suited for real-time applications like video conferencing and VoIP calls. ATM technology was beneficial in the early days of broadband internet, allowing Internet Service Providers (ISPs) to efficiently manage bandwidth by allocating fixed amounts to each user. It pioneered QoS (Quality of Service) technology, which ensured a balanced distribution of network resources, improving reliability and speed for all users. Over the past few decades, ATM has been supplanted by newer technologies such as Ethernet routing and MPLS (Multiprotocol Label Switching). NOTE: Outside the computing world, an "ATM" typically refers to an "Automated Teller Machine," a device used to complete banking transactions, such as cash withdrawals. Updated January 14, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/atm Copy

Attachment

Attachment An attachment, or email attachment, is a file sent with an email message. It may be an image, video, text document, or any other type of file. Most email clients and webmail systems allow you to send and receive attachments. To send an attachment along with your email, you can use the "Attach" command, then browse to the file you want to attach. In some email interfaces, you can simply drag a file into the message window to attach it. When you receive an attachment, most email programs allow you to view the attachment in place or save it to your local storage device. While modern email programs make it easy to send and receive attachments, the original email system (SMTP) was actually not designed to handle binary files. Therefore, attachments must be encoded as text in order to be transferred with an email message. The most common encoding type is MIME (Multi-Purpose Internet Mail Extensions). While MIME encoding makes it possible to send attachments with email messages, it typically increases the file size of the attachment by about 30%. That's why when you attach a file to an email message, the file size of the attachment appears larger than the original file. You can attach multiple files to a single email message. However, the maximum size of the combined attachments is limited by the sending and receiving mail servers. In other words, the size of the attachment(s) after being encoded cannot be larger than the limit of either the outgoing or incoming mail server. In the early days of email, attachments were limited to one megabyte (1 MB). Today, many mail servers allow attachments larger than 20 MB. However, to protect against viruses and malware, many mail servers will not accept executable file types, such as .EXE or .PIF files. If you need to send an executable file to someone, you can compress the file as a .ZIP archive before attaching it to the email message. NOTE: Even a large amount of text takes up a small amount of space compared to most binary files. Therefore, attaching a document to an email may increase the size substantially. For example, a typical email may only require one kilobyte (1 KB) of disk space. Attaching a single 1 MB file will make the message 1,000 times larger. Therefore, it is best to share large files using another method like FTP or Dropbox. Additionally, if you have almost reached your email quota on your mail server, you can free up a lot of space by deleting old attachments. Updated April 24, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/attachment Copy
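The roughly 30% size increase from encoding can be demonstrated with Python's base64 module (the encoding most commonly used by MIME); the attachment here is simulated with random bytes.

import base64
import os

data = os.urandom(300_000)            # stands in for a ~300 KB binary attachment
encoded = base64.b64encode(data)
growth = (len(encoded) - len(data)) / len(data)
print(f"Encoded attachment is about {growth:.0%} larger")   # about 33%

In practice, MIME also inserts line breaks and headers, so the real overhead is slightly higher than the raw base64 figure.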

ATX

ATX Stands for "Advanced Technology eXtended." ATX is a motherboard specification that defines the board's physical dimensions, connector placement, I/O ports, and supported power supplies. Intel introduced the ATX specification to replace the previous AT standard for desktop PCs. Several variations of the ATX have been introduced since then and are common in desktop computers. The ATX specification introduced several significant improvements to motherboard design. It standardized the position of the I/O panel, directly integrating ports into the motherboard. It also moved the CPU and RAM slots to a location where they wouldn't interfere with full-length expansion cards. It sets requirements for compatible power supplies to include a series of connectors to provide power to the motherboard, processor, and expansion cards. It moved the connector for storage devices closer to the location of the drive bays to shorten the length of cables needed. ATX motherboards and cases also provide better airflow through the chassis than older designs. An ATX motherboard, showing the layout of its I/O panel The ATX specification defines the location of the mounting holes in a computer case, so any ATX motherboard will fit in an ATX case. It also specifies several different-sized variations — some are larger to support more expansion ports, while others are smaller to fit in compact cases: FlexATX – 9 × 7.5 in (229 × 191 mm) MicroATX – 9.6 × 9.6 in (244 × 244 mm) ATX — 12 × 9.6 in (305 × 244 mm) Extended ATX (EATX) – 12 × up to 13 in (305 × up to 330 mm). Many manufacturers make motherboards smaller than the full size allowed by the standard. NOTE: Since the different sizes use the same series of mounting holes, smaller-sized motherboards (like MicroATX) will fit in cases designed for larger ATX motherboards. Updated January 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/atx Copy

Augmented Reality

Augmented Reality Augmented reality, commonly abbreviated "AR," is computer-generated content overlaid on a real-world environment. AR hardware comes in many forms, including devices that you can carry, such as handheld displays, and devices you wear, such as headsets and glasses. Common applications of AR technology include video games, television, and personal navigation, though there are many other uses as well. Pokémon Go is a popular video game that uses augmented reality. The app, which runs on iOS and Android devices, uses your smartphone's GPS signal to detect your location. As you walk around, your avatar is overlaid on a real-world map, along with in-game content, such as Pokéstops, gyms, and Pokémon that appear. When you attempt to catch a Pokémon, it shows up with a real-world background created by your smartphone's camera. The camera can be turned on or off using the "AR" toggle switch. Augmented reality is also used in television, especially in sports. For example, golf broadcasts sometimes display a line on the screen that tracks the flight of the ball. NFL games display a first down line overlaid on the field. Major league baseball games often display dynamically generated ads behind home plate. In Olympic races, a world record (WR) line is displayed during some races to show how close an athlete is to a record time. In navigation, AR is used to display location information in real-time. This is typically done through a heads-up display (HUD) that projects images in front of you like a hologram. For instance, a HUD in an automobile might display your speed, engine RPMs, and other useful data. Google Glass, a head-mounted display, can overlay directions from Google Maps and identify locations using the built-in camera. AR vs VR While augmented reality and virtual reality share some things in common, they are two different technologies. AR augments reality, but doesn't replace it. VR completely replaces your surroundings with a virtual environment. Therefore, any hardware that combines digital content with your actual surroundings is an AR device. Hardware that operates independently from your location and encompasses your vision is a VR device. Updated August 18, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/augmented_reality Copy

AUP

AUP Stands for "Acceptable Use Policy." An AUP is a list of rules you must follow in order to use a website or Internet service. It is similar to a software license agreement (SLA), but is used specifically for Internet services. Most well-known, high-traffic websites include an AUP, which may also be called Terms of Service (TOS) or Terms of Use (TOU). You can often find a link to the website's AUP in the footer of the home page. Many web services, such as cloud applications, require you to agree to an AUP in order to use the online service. ISPs often provide an AUP with each account, which states specific guidelines you must follow. The specifics of an AUP vary depending on the service offered. Even website AUPs may differ greatly based on the purpose of the website and the website's content. However, most AUPs include a list of general dos and don'ts while using the service, such as the following: Do not violate any federal or state laws. Do not violate the rights of others. Do not distribute viruses or other malware. Do not try to gain access to an unauthorized area or account. Respect others' copyrights and intellectual property. Familiarize yourself with the usage guidelines and report violations. An AUP serves as an agreement between the user and the company offering the online service. Some rules are basic netiquette, while others may have legal ramifications. If you fail to comply with a policy in an AUP, the company has the right to suspend or terminate your account or take legal action if necessary. Therefore, it is wise to familiarize yourself with the AUPs of the Internet services you use. Updated January 16, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/aup Copy

Authentication

Authentication In computing, authentication is the process of verifying the identity of a person or device. A common example is entering a username and password when you log in to a website. Entering the correct login information lets the website know 1) who you are and 2) that it is actually you accessing the website. While a username/password combination is a common way to authenticate your identity, many other types of authentication exist. For example, you might use a four or six-digit passcode to unlock your phone. A single password may be required to log on to your laptop or work computer. Every time you check or send email, the mail server verifies your identity by matching your email address with the correct password. This information is often saved by your web browser or email program so you do not have to enter it each time. Biometrics may also be used for authentication. For example, many smartphones have a fingerprint sensor that allows you to unlock your phone with a simple tap of your thumb or finger. Some facilities have retinal scanners, which require an eye scan to allow authorized individuals to access secure areas. Apple's Face ID (introduced with the iPhone X) authenticates users by facial recognition. Two-Factor Authentication Authentication is a part of everyday life in the digital age. While it helps keep your personal information private, it is not foolproof. For example, if someone knows your email address, he or she could gain access to your account by simply guessing your password. This is why it is important to use uncommon, hard-to-guess passwords, especially for your email accounts. It is also a good idea to use two-factor authentication when available, as this provides an extra security check when accessing your account. Two-factor authentication (also "2FA") typically requires a correct login plus another verification check. For example, if you enable 2FA for your online bank account, you may be required to enter a temporary code sent to your phone or email address to complete the login process. This ensures that only you (or someone with access to your phone or email account) can access your account, even after entering the correct login information. NOTE: In many cases, you can select the "Remember Me" checkbox when logging in to a secure website. This will store a cookie on your device (computer, phone, tablet, etc.), which indicates the device is trusted, or previously authenticated, for that website. The cookie may keep you logged in over multiple browser sessions or may prevent the need for two-factor authentication when using that device in the future. Updated July 13, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/authentication Copy
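As a simplified sketch of password authentication (real systems add many more safeguards), a server can store a salted hash instead of the password itself and compare hashes at login; the password and salt length below are chosen only for illustration.

import hashlib
import os
import secrets

def hash_password(password, salt=None):
    # Derive a salted hash that can be stored instead of the password itself
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(attempt, salt, stored_digest):
    # Recompute the hash for the attempt and compare it in constant time
    _, attempt_digest = hash_password(attempt, salt)
    return secrets.compare_digest(attempt_digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, stored))  # True
print(authenticate("guess123", salt, stored))                      # False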

Autocomplete

Autocomplete Autocomplete, also known as autosuggest or search suggest, is a feature that provides predictions as you type in a text box. It is commonly associated with search engines, though it may be used for other purposes as well. One of the most common places you will see autocomplete is in the address bar of your web browser. Since most browsers allow you to perform searches directly from the address bar, you will likely see search suggestions appear as you begin typing. However, since you can also type URLs in the address bar, your browser may also suggest webpages from your browsing history or bookmarks, or other popular websites. If you visit a search engine website, like Google or Bing, you'll most likely be presented with a list of search suggestions immediately as you start typing in the search box. These suggestions come primarily from a history of aggregate user searches recorded by the search engine. However, modern autocomplete algorithms may use other information as well. For example, a search engine may use your location to provide queries relevant to your surroundings. It may take into account the current time or day of the week to provide suggestions relevant to the time of your search. If you are logged in or allow cookies to store your browsing information, the search engine may use your browsing or search history to provide more targeted search suggestions. Autocomplete provides helpful search ideas, but it is also a time-saver. You can simply click on the suggestion you want to use instead of typing the full search phrase. You can also press the down arrow on your keyboard to scroll through the list of suggestions, then press Enter to choose one. If you are using a mobile device, you can simply tap the suggestion you want to use. While autocomplete is a popular feature for web search, many other websites use it as well. For example, websites like Amazon and Best Buy provide suggestions as you search, which are relevant to your browsing or purchase history. Even this website, TechTerms.com, uses autocomplete to provide a list of commonly searched tech terms as you type. You can try it for yourself by typing in the search field at the top of the page! NOTE: The boxes that display word suggestions as you type in Android and iOS are similar to autocomplete, but they are used for more general purposes, such as texting and composing documents. In Android, this feature is called predictive text, while in iOS, it is called QuickType. Updated December 29, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/autocomplete Copy
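At its simplest, autocomplete is prefix matching against previous searches; real search engines add ranking signals such as popularity, location, and time. A minimal Python sketch with made-up queries:

past_searches = ["racing game", "racing wheels", "radio stations",
                 "photo editor", "photosynthesis"]

def suggest(prefix, limit=3):
    # Return up to `limit` previous searches that begin with the typed prefix
    prefix = prefix.lower()
    return [s for s in past_searches if s.startswith(prefix)][:limit]

print(suggest("ra"))      # ['racing game', 'racing wheels', 'radio stations']
print(suggest("photo"))   # ['photo editor', 'photosynthesis']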

Autocorrect

Autocorrect Autocorrect is a software feature that corrects misspellings as you type. It is integrated into mobile operating systems like Android and iOS and is therefore a standard feature on most smartphones and tablets. It is also supported by modern desktop operating systems, such as Windows and macOS. Autocorrect makes it easier to type words on a mobile device with a touchscreen. Since these devices often have tiny onscreen keyboards, it is easy to make mistakes as you type. Autocorrect can fix these mistakes on the fly, helping you type faster without having to go back and correct your typos. Each autocorrect algorithm is different, but the goal is the same — to replace misspellings with what you intended to type. In some cases, the algorithm may simply match the closest word (such as replacing "mispeling" with "misspelling"). In other cases, it may guess what letter(s) you intended to type based on nearby keys on the keyboard. For example, when you are typing a text message, your phone might replace "see you latet" with "see you later." While autocorrect is often associated with mobile devices, it is also found in modern desktop operating systems. Since the technology is included at the OS level, most applications, such as text editors and word processors, include autocorrect support. While the algorithm on a desktop computer may be different than the one on a smartphone, they both work to correct errors as you type. Autocorrect can improve your typing efficiency and has the added benefit of teaching you the correct spellings of words that you spell incorrectly. Most of the time, autocorrect fixes legitimate errors. However, sometimes it may try to spell a word differently than you would like. Before your device autocorrects the text, it will usually display the suggested replacement. If you prefer to use your own spelling, you can click the "x" next to the suggestion to prevent the autocorrection of the word. Otherwise, autocorrect will override your version when you press the space bar, Enter, or a punctuation mark. If you prefer not to be autocorrected at all, you can turn off the autocorrect feature in your device settings. Below are a few examples: Windows 10 - Settings → Devices → Typing → Spelling → Turn off "Autocorrect misspelled words" macOS - System Preferences → Keyboard → Text → Uncheck "Correct spelling automatically" iOS - Settings → General → Keyboards → Turn off "Auto-Correction" Android - Settings → Language & input → Google Keyboard → Text-correction → Turn off "Auto-correction" Updated January 12, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/autocorrect Copy
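A minimal sketch of the "closest word" approach, using Python's difflib module and a tiny made-up dictionary (real autocorrect systems use much larger dictionaries and keyboard-distance models):

import difflib

dictionary = ["misspelling", "missing", "message", "later", "latter"]

def autocorrect(word):
    # Replace a typo with the closest dictionary word, if one is similar enough
    matches = difflib.get_close_matches(word, dictionary, n=1, cutoff=0.75)
    return matches[0] if matches else word

print(autocorrect("mispeling"))  # misspelling
print(autocorrect("latet"))      # later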

Autofill

Autofill Autofill is a software function that automatically enters data in web forms and spreadsheets. It should not be confused with autocomplete or autocorrect, which perform separate functions. Autocomplete finishes words or phrases while typing, and autocorrect automatically fixes spelling mistakes. While both web browsers and spreadsheet applications support autofill, the functionality is different. Below is an explanation of how autofill works in each type of application. Web Browser Autofill Any time you enter information on a website, you fill out a form. Some forms, such as login forms, only have two fields (for a username and password). Others have several fields that you must complete before submitting the form. Most online registration and purchase forms require standard information, such as your name, email address, phone number, and home address. Autofill saves this information in your browser and allows you to fill in common form fields with a single click. Autofill can also save other types of information, such as website logins and credit card numbers. Most browsers securely store usernames and passwords for different websites. Once your login information has been saved, you can use autofill to fill in the username and password fields with a single click. Spreadsheet Autofill Autofill in spreadsheet applications, such as Microsoft Excel, provides an easy way to fill in empty cells. After entering information in one or two cells, you can drag the fill handle (the small box in the lower-right corner of the current selection) across multiple cells to automatically fill them in with similar data. Autofill can be used for both rows (horizontally) and columns (vertically). For example, if you enter "1" in cell A1, then drag the fill handle downwards, autofill will automatically fill in the selected cells with the number 1. If you enter "1" in A1 and "2" in A2, then select both A1 and A2, then drag the fill handle downwards, it will increment the values in the cells. A3 will become "3," A4 will become "4," etc. If you enter "2" in A1 and "4" in A2, the values will increment by two. For example, A3 will become "6," A4 will become "8," etc. Autofill can detect several different patterns, which can save time when creating new spreadsheets. For example, if you need to enter years as rows and months as column headers, you can use autofill to complete most of the cells. Simply type the first year, e.g., "2001," in the first cell, followed by "2002" below it. Select the two cells, then drag the fill handle downwards to add as many years as you want. To enter months, type "January" in one cell, followed by "February" in the cell next to it. Then select both cells and drag the fill handle to the right to fill in all twelve months. Most spreadsheet applications will wrap from December to January if you continue dragging to the right. NOTE: Autofill is also called "AutoFill" in Apple Safari and "Auto Fill" in some versions of Microsoft Excel. Updated November 6, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/autofill Copy
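The number-series behavior described above can be sketched in a few lines of Python; spreadsheet applications detect many more patterns (dates, month names, custom lists) than this simple example does.

def autofill(seed, count):
    # Extend a numeric series the way a spreadsheet fill handle does
    step = seed[1] - seed[0] if len(seed) > 1 else 0
    series = list(seed)
    while len(series) < count:
        series.append(series[-1] + step)
    return series

print(autofill([1], 5))     # [1, 1, 1, 1, 1]
print(autofill([1, 2], 5))  # [1, 2, 3, 4, 5]
print(autofill([2, 4], 5))  # [2, 4, 6, 8, 10]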

Autoresponder

Autoresponder An autoresponder is a script that automatically replies to emails sent to a specific email address. It may be used for away messages, email confirmations, or for several other purposes. Autoresponders can be configured on a mail server or using an email client. When configured on a mail server, the server automatically sends response emails while the autoresponder is active. Server-based autoresponders are often configured using a webmail interface. For example, Gmail provides a "vacation reply" for this purpose. They can also be created by a server administrator for one or more email addresses. cPanel, a popular web hosting platform for Linux, allows admins to manage autoresponders by logging into the control panel for a specific account and selecting Email → Autoresponders. To set up an autoresponder using a mail client, you typically create a "rule." For example, you can add a rule that automatically replies to messages sent to a specific email address. This type of rule works well once it has been configured, but the mail client must be open in order for the autoresponder to work. If you set up a vacation reply in your mail program on your home computer, then turn off your computer before you leave, the autoresponder will not work. When setting up an auto-reply on a server, you may be able to enter a date range for when it is active. If the autoresponder does not turn off automatically, it is a good idea to set a reminder for yourself to turn it off when you return. NOTE: Email "bounce" messages are sent automatically, but are not considered autoresponders since they are not configured by users. Updated August 7, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/autoresponder Copy

Avatar

Avatar Generally speaking, an avatar is the embodiment of a person or idea. However, in the computer world, an avatar specifically refers to a character that represents an online user. Avatars are commonly used in multiplayer gaming, online communities, and web forums. Online multiplayer role-playing games (MMORPGs) such as World of Warcraft and EverQuest allow users to create custom characters. These characters serve as the players' avatars. For example, a World of Warcraft player may choose a Paladin with blue armor as his avatar. As the player progresses in the game, his character may gain items and experience, which allows the avatar to evolve over time. Avatars are also used in online communities, such as Second Life and The Sims Online. These avatars can be custom-designed to create a truly unique appearance for each player. Once a user has created an avatar, he or she becomes part of an online community filled with other users' avatars. Players can interact with other avatars and talk to them using text or voice chat. It's no surprise that "Second Life" refers to a virtual life that players live through their avatars. Finally, avatars are common in web forums. Online discussion boards typically require users to register and provide information about themselves. Many give users the option to select an image file that represents the user's persona. This image, combined with a made-up username, serves as a person's avatar. For example, a user may select a picture of a Pac-Man and choose the name "pac32" for his avatar. This avatar typically appears next to each posting the user contributes in an online forum. Regardless of the application, avatars allow people to represent themselves online in whatever way they want. They may be considered alter-egos, since users can customize characters that are completely different than their actual personas. Of course, what's the point of having a "second life" if it's the same as reality? Updated April 27, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/avatar Copy

AVR

AVR Stands for "Audio/Video Receiver." An AVR, often called a receiver, is the central routing and processing component in a home theater. It can receive signals from connected components and route them to different devices. AVRs are also sometimes referred to as "amplifiers," since one of their primary functions is to amplify an audio signal before sending it to the speakers. In a typical home theater setup, all devices are connected to the HDMI ports on the back of the AVR. The audio is routed to speakers, such as Dolby 5.1 surround system (five speakers plus one subwoofer). The video is typically output to a television. In a modern home theater, the TV may serve as a monitor since the audio is processed by the AVR and video input is handled by a cable box, Apple TV, or another device. Smart TVs are an exception since they are both an input device (sending audio and video data to the AVR) and an output device (displaying video from built-in apps or other devices. History of AVRs Early receivers were not called AVRs since they only handled audio signals. The inputs and outputs were primarily analog, except for a optical audio connection such as a Toslink or S/PDIF port. Eventually, receivers were built to route video signals as well as audio. As digital devices became more common, receivers started to serve a more primary role as the central digital controller of a home theater system. Since HDMI enables bidirectional communication, devices can now communicate with each other. For example, an AVR can tell a television to turn on or off and a TV can tell an AVR to change the volume. Digital device commands can be synced through an AVR using an HDMI standard called HDMI Consumer Electronics Control or "HDMI-CEC." Different manufacturers use different names for this technology, including Bravia Sync (Sony), Anynet+ (Samsung), and SimpLink (LG), but most brands work with components made by other manufacturers. Modern AVRs Modern AVRs are far more functional than older receivers that simply amplified signals from radio transmissions, audio tapes, and CD players. They now serve as the control center for most home theaters. Many AVRs also support wireless technologies such as Wi-Fi and Bluetooth, which allows you to stream music wirelessly to speakers connected to the receiver. Updated October 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/avr Copy

Azure

Azure Azure is a cloud computing platform built and operated by Microsoft. It allows companies to run applications and host content in the cloud. While Microsoft designed Azure to support enterprise computing requirements, the service is also available to small businesses and individuals. The Azure infrastructure consists of multiple data centers distributed around the world. The global network of computers provides low latency and high reliability regardless of where users access the service. Azure includes multiple types of servers, including Windows Server, Linux, and SAP HANA machines. Below are a few examples of the 200+ Azure solutions: SaaS - running web applications and Internet-based services CDN - delivering websites and streaming content from edge nodes around the world E-commerce - providing customized shopping cart experiences localized for each visitor Database storage - enabling high-speed data storage and access Software testing - providing access to virtual machines that run different operating systems Because of the benefits of a globally-distributed computing network, many businesses have moved locally-hosted data to a cloud service like Azure. While migrating data and applications to Azure may require several steps, starting the process is simple. Developers can sign up for an Azure account, create a new cloud computing "instance," and start using the service immediately. Updated March 24, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/azure Copy

Backbone

Backbone A backbone network is a robust, high-speed network that links multiple local networks into a single wide-area network. Similar to how a person's backbone carries signals to and from smaller groups of nerves in their nervous system, a backbone network forms the central communications infrastructure that transmits data between networks. The largest backbone network connects ISPs around the world to create the Internet. Computers on a network do not connect to a network backbone directly. Instead, each local network connects to its backbone through an edge router, which directs data packets to their destination using the most efficient route possible. A backbone typically uses faster transmission lines than an individual network — often a trunk of multiple fiber-optic cables. A connection to the backbone often maximizes its speed using link aggregation, which connects a device to several separate lines and sends data across them in parallel. For example, an edge router may link to five 10-gigabit fiber optic lines in aggregate to provide a single 50-gigabit connection. A map of global fiber backbone networks and Internet exchange points The backbone network that forms the Internet consists of trunks of high-speed fiber optic cables owned by several global telecommunications companies, categorized as Tier 1 ISPs. All Tier 1 ISPs work together through peering agreements that allow the free flow of traffic between their networks via hundreds of high-speed linkages, called Internet exchange points, located around the world. Tier 1 ISPs provide Internet backbone connections to medium-sized (Tier 2) ISPs, who in turn provide service to large enterprises, data centers, and regional (Tier 3) ISPs. Updated July 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/backbone Copy

Backdoor

Backdoor A backdoor is a security vulnerability in a computer system, such as a web server or home PC. It is a hidden way to access a system that bypasses typical authentication methods. The term may be used as a noun or an adjective, such as "backdoor hack" or "backdoor access." Security holes can exist in operating systems and individual applications. Operating system vulnerabilities are the most serious because once a hacker gains system-level access, he may have full access to all programs and data on the system. Application security holes are also serious since they may allow hackers to view or alter data on a computer and may be more difficult to detect. Examples of malware that may facilitate backdoor access include: Viruses Rootkits Trojans Spyware In some cases, backdoor hacks are immediately apparent since they affect the functionality of the system. In other cases, they may go undetected for hours, days, or even months. Long-term access is especially problematic since it provides hackers extra time to access, alter, and download data. For this reason, servers often have monitoring software that checks for unrecognized processes running on the system. Most antivirus software runs similar security checks. Backdoor Attacks vs Other Cyberattacks A backdoor attack is the opposite of a brute force attack, which attempts to gain access to a system through the standard login interface. Backdoor access bypasses the typical username and password authentication and gains access to the underlying system. Viruses and other malware are not considered backdoor hacks, but instead, they facilitate backdoor access. For example, a malware program may install a script that provides a hacker with admin access to the system, even after the malware has been removed. Hackers can also gain backdoor access through a command-line interface, using SSH or other text-based commands. Updated March 30, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/backdoor Copy

Backend

Backend In the computer world, the "backend" refers to any part of a website or software program that users do not see. It contrasts with the frontend, which refers to a program's or website's user interface. In programming terminology, the backend is the "data access layer," while the frontend is the "presentation layer." Most modern websites are dynamic, meaning webpage content is generated on-the-fly. A dynamic page contains one or more scripts that run on the web server each time the page is accessed. These scripts generate the content of the page, which is sent to the user's web browser. Everything that happens before the page is displayed in a web browser is part of the backend. Examples of backend processes include: processing an incoming webpage request running a script (PHP, ASP, JSP, etc.) to generate HTML accessing data, such as an article, from a database using an SQL query storing or updating records in a database encrypting and decrypting data handling file uploads and downloads processing user input via JavaScript All of the examples above, besides the last one, are server-side processes that run on the web server. JavaScript is a client-side process, meaning it runs in the web browser. JavaScript may be considered a backend or a frontend process, depending on whether the code affects the user interface. The backend and frontend work together to create the full user experience. Data generated in the backend is passed to the frontend and presented to the user. While some organizations have separate backend and frontend development teams, the line between the two layers is rarely black and white. Therefore, many developers write code for both the backend and frontend. This is known as full-stack development. NOTE: Backend may also be written "back end" (as a noun) or "back-end" (as an adjective). For simplicity, "backend" (the closed compound word) has become an acceptable term for both. Updated April 11, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/backend Copy
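To make the request-to-HTML flow concrete, below is a minimal backend sketch written in Python using the Flask web framework (rather than PHP); the route, database file, and table schema are invented for this example.

    # Minimal backend sketch: receive a request, query a database, return HTML.
    # Assumes a SQLite file "articles.db" with a table articles(id, title, body).
    import sqlite3
    from flask import Flask

    app = Flask(__name__)

    @app.route("/article/<int:article_id>")
    def show_article(article_id):
        # Data access layer: fetch the requested record from the database
        conn = sqlite3.connect("articles.db")
        row = conn.execute(
            "SELECT title, body FROM articles WHERE id = ?", (article_id,)
        ).fetchone()
        conn.close()
        if row is None:
            return "Article not found", 404
        # Generate the HTML that is sent back to the user's browser (the frontend)
        return f"<html><body><h1>{row[0]}</h1><p>{row[1]}</p></body></html>"

    if __name__ == "__main__":
        app.run()  # start a development web server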

Backlink

Backlink A backlink is an incoming link from an external website to a specific webpage. For example, if you publish a webpage and 20 other websites link to it, your webpage has 20 backlinks. Links to the page from within your own website are not included in the backlink total. Web developers benefit from backlinks (or "inlinks") in two different ways — direct traffic and search result placement. As more links to a specific webpage are published on external sites, there is greater potential for traffic to be generated from other websites. This is called direct traffic. By increasing direct traffic, a website can gradually grow its presence on the Web and generate a steady stream of visitors from other websites. While direct traffic is helpful, most websites generate the majority of their traffic through search engines. Since search engines use backlinks as an important part of their algorithms for search result placement, external links are important for good search ranking. Therefore, generating backlinks has become common practice for search engine optimization, or SEO. The more backlinks a webpage has, the better the chance that the page will rank highly in search results for relevant keywords. If a website has many pages that have backlinks, the overall number of incoming links may help increase the ranking of all pages within the website. While most backlinks point to a website's home page, incoming links to other pages within the website are beneficial as well. Updated December 17, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/backlink Copy

Backside Bus

Backside Bus There are two types of buses that carry data to and from a computer's CPU. They are the frontside bus and backside bus. Surprisingly, there is no correlation between these and the backside and frontside airs that snowboarders talk about. While the frontside bus carries data between the CPU and memory, the backside bus transfers data to and from the computer's secondary cache. The secondary, or L2, cache stores frequently used functions and other data close to the processor. This allows the computer's CPU to work more efficiently since it can repeat processes faster. When the processor needs information from the L2 cache, it is sent over the backside bus. Because this process needs to be extremely fast, the clock speed of the backside bus cannot afford to lag behind. For this reason, the backside bus is often as fast as the processor. The frontside bus, on the other hand, is typically half the speed of the processor or slower. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/backsidebus Copy

Backup

Backup Backup is the most important computer term you should know. A backup is a copy of one or more files created as an alternate in case the original data is lost or becomes unusable. For example, you may save several copies of a research paper on your computer as backup files in case you decide to use a previous revision. Better yet, you could save the backups to a USB flash drive, which would also protect the files if the hard drive failed, or even save a copy to a cloud storage site in case an accident causes damage to both the computer and flash drive. Most computer components, like hard drives and solid-state drives, can run for years without crashing. However, like all electronic devices, they are not immune to problems. An electrical short or physical damage (especially when it comes to laptops and mobile devices) can cause data to become unrecoverable, so it's wise to maintain backups of important files on multiple devices. Hardware malfunctions are not the only reason to maintain backups. Software problems can also damage your files. File system corruption can damage directory structures and cause entire folders to disappear. You may mistakenly delete files, or a virus may corrupt or encrypt them. Program installation conflicts can make applications or files unusable, and software bugs can accidentally delete files and folders. If you don't have backups of your important files, an unexpected event could erase them for good. How to Make Backups So how do you back up your data? Most operating systems include built-in backup features that help you get started, but there are some best practices to follow that can keep your data safe in case of disaster or emergency. Many experts recommend a backup strategy known as the 3-2-1 rule. First, keep three copies of your data — one primary copy and two backup copies. Keep those backups on two different types of media — for example, one on an external hard drive and another on a flash drive. Finally, keep one of those backups at a different location in case of theft, fire, flood, or other disasters. The simplest backup method is to manually copy files from the primary location to a backup location. However, that method relies on you making frequent backups whenever data changes — if you lose a file and the most recent backup is two months old, you've lost all of that progress. Most operating systems now include built-in backup utilities that automate the process, making frequent backups without any effort on your part. Make sure that the backup includes the folders you're regularly saving files to, and that any external disk you're backing up to remains connected. Many backup utilities, like macOS's Time Machine, allow you to restore a backup from a specific date, rolling back to a version of a file before any unwanted changes. In addition to a local backup, maintaining an offsite cloud backup is an important part of a backup strategy. Like a local backup utility, cloud backup programs also automate the creation of backups and regularly sync specified folders to cloud storage. You can restore an entire backup or just a single file, and even access backed-up files from another computer. Since these backups are in the cloud, they keep your files safe even if both your computer and primary backup device are lost, stolen, or damaged. Updated January 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/backup Copy
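As a simple illustration of the manual method, the Python sketch below copies a folder to a date-stamped backup folder on another drive. The source and destination paths are placeholders and would need to be adjusted for a real system.

    # Minimal backup sketch: copy a folder to a date-stamped backup location.
    # The paths below are placeholders -- change them for your own system.
    import shutil
    from datetime import date
    from pathlib import Path

    source = Path.home() / "Documents"                # primary copy
    backup_root = Path("/Volumes/BackupDrive")        # e.g., an external drive
    destination = backup_root / f"Documents-{date.today().isoformat()}"

    shutil.copytree(source, destination)              # copy every file and subfolder
    print(f"Backed up {source} to {destination}")

A scheduled backup utility automates exactly this kind of copy, so you don't have to remember to run it.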

Bandwidth

Bandwidth Bandwidth describes the maximum data transfer rate of a network or Internet connection. It measures how much data can be sent over a specific connection in a given amount of time. For example, a gigabit Ethernet connection has a bandwidth of 1,000 Mbps (125 megabytes per second). An Internet connection via cable modem may provide 25 Mbps of bandwidth. While bandwidth is used to describe network speeds, it does not measure how fast bits of data move from one location to another. Since data packets travel over electrical or fiber optic cables at close to the speed of light, the travel time of each individual bit is negligible. Instead, bandwidth measures how much data can flow through a specific connection at one time. When visualizing bandwidth, it may help to think of a network connection as a tube and each bit of data as a grain of sand. If you pour a large amount of sand into a skinny tube, it will take a long time for the sand to flow through it. If you pour the same amount of sand through a wide tube, the sand will finish flowing through the tube much faster. Similarly, a download will finish much faster when you have a high-bandwidth connection rather than a low-bandwidth connection. Data often flows over multiple network connections, which means the connection with the smallest bandwidth acts as a bottleneck. Generally, the Internet backbone and connections between servers have the most bandwidth, so they rarely serve as bottlenecks. Instead, the most common Internet bottleneck is your connection to your ISP. NOTE: Bandwidth also refers to a range of frequencies used to transmit a signal. This type of bandwidth is measured in hertz and is often referenced in signal processing applications. Updated May 16, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bandwidth Copy
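The effect of bandwidth on download time is easy to estimate. The short Python sketch below compares a hypothetical 500 MB download over a 25 Mbps cable connection and a 1,000 Mbps gigabit connection, ignoring protocol overhead and other bottlenecks.

    # Estimate transfer time for a file at different bandwidths (ideal conditions).
    def transfer_time(file_megabytes, bandwidth_mbps):
        file_megabits = file_megabytes * 8        # 1 byte = 8 bits
        return file_megabits / bandwidth_mbps     # seconds

    file_size = 500  # MB, a hypothetical download
    for mbps in (25, 1000):
        print(f"{mbps} Mbps: {transfer_time(file_size, mbps):.0f} seconds")
    # 25 Mbps: 160 seconds; 1000 Mbps: 4 seconds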

Banner Ad

Banner Ad Whether you like it or not, much of the Web is run by advertising. Just like television or radio, websites can offer free content by generating revenue from advertising. While you may get tired of Web ads from time to time, most people would agree that seeing a few advertisements here and there is better than paying a usage fee for each website. Perhaps the most prolific form of Web advertising is the banner ad. It is a long, rectangular image that can be placed just about anywhere on a Web page. Most banner ads are 468 pixels wide by 60 pixels high (468x60). They may contain text, images, or sometimes those annoying animations that make it hard to focus on the page's content. Regardless of the type of banner ad, when a user clicks the advertisement, he or she is redirected to the advertiser's website. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bannerad Copy

BarCraft

BarCraft A BarCraft is an event in which people gather to watch live StarCraft 2 competitions. Most BarCraft gatherings take place in sports bars, though the term can be used to describe any event in which a group of people watch StarCraft 2 together. StarCraft 2 (also "SC2") is a popular real-time strategy (RTS) game in which players choose to control one of three races. When a game starts, each player begins with a base at a fixed location on a map. Players then build structures, create units, and attempt to attack and defeat other players on the map. 1v1, 2v2, 3v3, 4v4, and FFA (free for all) games are possible, though 1v1 games are the most common. Since StarCraft 2 is one of the most popular eSports games, professional tournaments are held on a regular basis. Most BarCrafts are organized around prominent StarCraft 2 tournaments, such as MLG, NASL, or GSL competitions. These tournaments are streamed live over the Internet and often require a subscription to watch, similar to pay-per-view TV. Venues that host BarCrafts typically cover the viewing fee, but may charge patrons an admittance fee to watch the games. Most BarCraft venues stream the SC2 matches live on one or more screens. Updated August 16, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/barcraft Copy

Bare Metal

Bare Metal A bare metal system is a computer that does not include any software. This means the user must install an operating system in order for the hardware to be functional. The term "bare metal" is commonly used to refer to custom-built PCs and barebones systems. These types of computers are ideal for PC enthusiasts, who may prefer to install their own software rather than paying extra for pre-loaded programs. A bare metal system also allows advanced users to choose a non-mainstream OS, such as a specific Linux distribution. Web hosting companies may also offer bare metal systems to clients who wish to customize their web server from the ground up. For instance, a client may purchase a dedicated or co-located server with a specific hardware configuration. The server administrator can then install a specific operating system and web server software on the computer. NOTE: Bare metal is sometimes used to refer to a "bare metal restore," which refers to restoring a complete computer software configuration. Updated January 8, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bare_metal Copy

Bare Metal Restore

Bare Metal Restore A bare metal restore is a type of computer restoration process that restores the full software configuration from a specific system. It may be used to restore a computer system from a backup or simply migrate the software configuration from one machine to another. The terms "restore" and "bare metal restore" are often used interchangeably, since they both refer to bringing a computer back to a specific state. However, a bare metal restore is unique in that it can be used to restore a specific software configuration to a dissimilar hardware configuration. For example, a bare metal restore may be used to restore a Linux system to a new computer made by a different manufacturer. It can also be used to restore a complete system to a new hard drive with different partition sizes. A bare metal restore is similar to a disk image restore, since both types of restores are used to rebuild a computer's software from scratch. However, a disk image restore simply copies the data bit-for-bit to a specific storage device. This may cause problems if the new hardware does not support certain configurations contained in the disk image. Therefore, web hosting companies and network admins often create "bare metal backups" for their clients, which can be used to perform a bare metal restore. Updated January 8, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bare_metal_restore Copy

Barebones

Barebones A barebones PC is a computer that has minimal components. A typical barebones system includes a case, motherboard, CPU, hard drive, RAM, and power supply. Most barebones systems are sold as kits, in which the components must be assembled by the user. Since barebones PCs usually do not come preassembled, they are not designed for the average computer user. Instead, barebones kits are aimed at computer enthusiasts and users who prefer to build their own PCs. By purchasing only specific components, a user can fully customize his computer system and avoid paying for unwanted extras. For example, if you buy a barebones kit, you can choose your own keyboard and mouse, since they are not included. Also, no software is bundled with barebones systems, so you avoid paying for software you don't need. Additionally, most barebones systems do not come with an operating system, so you can choose to install any operating system that is compatible with the hardware. Nearly all barebones PCs are desktop computers since they are the most customizable. However, some companies also offer barebone laptop systems, or "barebooks," which can be homebuilt. While Macintosh computers now use most of the same components as PCs, the Mac OS X operating system requires proprietary hardware to run. Therefore, barebones systems are generally built to run either Windows or Linux. Updated June 16, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/barebones Copy

Base Station

Base Station A base station is a fixed wireless device that serves as a hub for other wireless devices and provides a bridge to another network. In a computer networking context, a base station broadcasts a wireless signal that allows other devices to join a local network. However, the term is also commonly used in the telecommunications industry to refer to cellular towers, wireless landline telephone docks, and other radio transceivers. A Wi-Fi base station, also known as an access point, is a central piece of infrastructure in a Wi-Fi network. Every wireless device first connects to the base station, which allows it to communicate with other devices on that network. A base station also provides a bridged connection to another network, like an Ethernet-wired LAN or the Internet. Some base stations include multiple Ethernet ports that allow them to also serve as a router for a wired network, while others only provide a bridge and a single Ethernet port to connect to another router. Mesh Wi-Fi networks use a central base station that wirelessly communicates with smaller satellite access points to deliver wireless networking over a larger area than a single access point can cover. A TP-Link Wi-Fi router that can serve as a base station for a Wi-Fi network The use of the term "base station" to describe any Wi-Fi router or access point is no longer as common as it once was. Most home networking equipment instead uses "access point" to refer to wireless bridges and "Wi-Fi router" to refer to access points with routing features. "Wi-Fi base station" is now generally used to refer to large-scale, enterprise-level wireless networking infrastructure designed to provide coverage for workplaces and public spaces. Updated September 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/base_station Copy

Baseband

Baseband Baseband refers to the original frequency range of a transmission signal before it is converted, or modulated, to a different frequency range. For example, an audio signal may have a baseband range from 20 to 20,000 hertz. When it is transmitted on a radio frequency (RF), it is modulated to a much higher, inaudible, frequency range. Signal modulation is used for radio broadcasts, as well as several types of telecommunications, including cell phone conversations and satellite transmissions. Therefore, most telecommunication protocols require original baseband signals to be modulated to a higher frequency before they are transmitted. These signals are then demodulated at the destination, so the recipient receives the original baseband signal. Dial-up modems are a good example of this process, since they modulate and demodulate signals when they are transmitted and received. In fact, the word "modem" is short for modulator/demodulator. While most protocols require the modulation of baseband signals, some can transmit in baseband without any signal conversion. A common example is the Ethernet protocol, which transfers data using the original baseband signal. In fact, the word "BASE" in "10BASE-T," "100BASE-T," and "1000BASE-T" Ethernet refers to baseband transmission. These Ethernet protocols do not require signal modulation. However, unlike broadband networks, baseband Ethernet networks are limited to a single transmission channel. Updated December 17, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/baseband Copy
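The Python sketch below (using NumPy) illustrates the idea of modulation: a low-frequency baseband tone is shifted onto a much higher carrier frequency before transmission. The frequencies are arbitrary example values, and real systems use more sophisticated modulation schemes.

    # Shift a baseband tone onto a higher-frequency carrier (simple amplitude modulation).
    import numpy as np

    sample_rate = 1_000_000                       # samples per second
    t = np.arange(0, 0.01, 1 / sample_rate)       # 10 ms of signal

    baseband = np.sin(2 * np.pi * 1_000 * t)      # 1 kHz baseband tone (audible range)
    carrier = np.sin(2 * np.pi * 100_000 * t)     # 100 kHz carrier (well above baseband)

    modulated = (1 + baseband) * carrier          # the signal that is actually transmitted
    # A receiver demodulates this signal to recover the original 1 kHz baseband tone.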

Bash

Bash Bash, though typically not capitalized, is an acronym for "Bourne-Again Shell" and is named after Stephen Bourne, the creator of the Unix shell "sh." It is a command language interpreter derived from sh that can execute commands entered at a command prompt and process text file input. Bash (bash) supports all the commands of the original Bourne shell (sh), as well as many others. It also includes features from the Korn shell (ksh) and C shell (csh), such as command line editing, command substitution syntax, and command history. Bash also supports "brace expansion," which is used to generate related text strings. This operation provides an efficient way to search for filenames and rename multiple files. Newer versions of Bash support regular expressions (Bash 3.0) and associative arrays (Bash 4.0). Bash was originally developed by Brian Fox for the GNU Project and was released in 1989. The bash shell was initially distributed with the GNU operating system and later became the default shell for multiple Linux distributions and Mac OS X. Recent versions of Bash (versions 3 and 4) were developed by Chet Ramey and are currently published by the Free Software Foundation, the same organization that distributes the GNU operating system. Updated August 15, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bash Copy

BASIC

BASIC Stands for "Beginner's All-purpose Symbolic Instruction Code." BASIC is a high-level computer programming language developed at Dartmouth College in the 1960s. It was designed to be easy to learn and accessible to students, allowing them to write their own programs. Its simplicity made it one of the first widely-used programming languages, particularly during the early days of home computing, but it is less common now following the introduction of other languages that are more capable but still easy to learn. BASIC uses simple and straightforward syntax that makes it easy for beginners to understand. In early versions of BASIC, every line was numbered to tell the computer how to process the instructions. Line numbers would often follow the pattern 10, 20, 30, etc., to allow you to insert new instructions later by giving them intermediary numbers like 25. BASIC also used a limited number of commands that were easy to understand. For example, the GOTO command told the program to jump to a specific line number, the PRINT command displayed text on the screen, and the INPUT command prompted the user for text input. IF...THEN commands allowed you to create simple conditional statements, and FOR...NEXT commands created simple loops. A BASIC program and its output Dialects of BASIC were available for most home microcomputers in the late 1970s. A version of BASIC for the Altair in 1975 was the first product released by Microsoft; versions were also available for the Apple II and the TRS-80. Computer hobbyist magazines would even include source code for BASIC programs that you could type into your computer and run. Eventually, commercially released software (written in other languages like C and C++) made it unnecessary for computer users to write their own programs. Updated November 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/basic Copy

Batch File

Batch File A batch file is a kind of script file containing a list of programmatic commands and instructions for Windows-based computers. When a user runs a batch file, it automatically executes each command in the script in sequence. Windows users and system administrators often use batch files to automate repetitive tasks instead of performing each step manually, like backing up a series of folders or renaming a large set of files. When you run a batch file, a command interpreter program reads it line by line, carrying out each command in order. Batch files come in two formats. Older DOS-based versions of Windows, like Windows 95 and 98, use batch files with the .BAT file extension through the COMMAND.COM command interpreter. NT-based versions of Windows, like Windows 2000 and all newer versions, use batch files with the .CMD extension through CMD.EXE. Both BAT and CMD files are plain text files you can create in any text editor like Notepad. A simple Windows 11 batch file running in the CMD.EXE command interpreter Early versions of Windows automatically loaded a special batch file, AUTOEXEC.BAT, at system boot. This batch file automatically configured system settings and loaded device drivers. System administrators and power users could edit AUTOEXEC.BAT to customize Windows' behavior — for example, automatically mapping network drives on startup. However, inexperienced users could cause significant problems if they made the wrong changes, possibly preventing Windows from even booting. While batch files are unique to Windows computers, Unix-based operating systems (including Linux and macOS) use similar files called shell scripts. Both batch files and shell scripts can contain commands that modify important system settings or open up security holes, so you should be careful and never run unknown batch files you receive from untrusted sources. File extensions: .BAT, .CMD Updated October 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/batchfile Copy

Batch Process

Batch Process A batch process is a computer process that performs a pre-determined series of commands on a group, or "batch," of data inputs. Batch processing data saves significant time by automating repetitive tasks consistently and repeatably. Once started, a batch process requires little or no input from the user. You can save commands to a script or batch file to run later, or enter them directly into an application or command line to run them immediately. These commands typically do not contain direct links to the data or files they process; instead, you can use wildcard characters to set what kind of file or data you want to process within the batch. For example, a photographer can create a batch file that takes an entire folder of photographs as input, then creates copies in a new folder that have been resized to a specific resolution and renamed with the date and time they were taken. All they need to do is tell the application or operating system running the batch process which folder to use, and the rest happens automatically. A macOS batch process converting a folder of JPEG images to PNG Batch processes, also known as jobs, may be scheduled to run one time or recur on a set schedule or interval. For example, a system administrator may write a batch file that backs up a specific local folder to cloud storage and then schedule that batch process to run every night. Unix and Linux include several task scheduling utilities — at schedules a job one time, while cron creates recurring tasks. Windows includes a Task Scheduler application that can run jobs at specific times or at regular intervals. In addition to cron and at, macOS supports job scheduling using the launchd utility; it also supports Quick Actions and Folder Actions that allow users to perform simple batch operations on selected files and folders. Updated November 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/batchprocess Copy
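The photographer example might look something like the Python sketch below, which resizes every JPEG in a folder and renames each copy with the file's timestamp. The folder names are placeholders, and the sketch assumes the third-party Pillow imaging library is installed.

    # Batch process: resize and rename every JPEG in a folder (requires Pillow).
    from datetime import datetime
    from pathlib import Path
    from PIL import Image

    input_folder = Path("photos")            # placeholder input folder
    output_folder = Path("photos_resized")   # placeholder output folder
    output_folder.mkdir(exist_ok=True)

    for photo in input_folder.glob("*.jpg"):              # the wildcard selects the batch
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        new_name = taken.strftime("%Y-%m-%d_%H%M%S") + ".jpg"
        with Image.open(photo) as img:
            img.resize((1920, 1280)).save(output_folder / new_name)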

Baud

Baud Baud, or baud rate, is used to describe the maximum oscillation rate of an electronic signal. For example, if a signal changes (or could change) 1200 times in one second, it would be measured at 1200 baud. While the term was originally used to measure the rate of electronic pulses, it has also become a way to measure data transmission speeds of dial-up modems. If a modem transfers a single bit per electronic pulse, one baud would be equal to one bit per second (bps). However, most modems transfer multiple bits per signal transition. For example, a 28.8 Kbps modem may send nine bits per signal transition. Therefore, it would only require a baud rate of 3,200 (28,800 / 9). 56K modems often use a baud rate of 8000. This means they send seven bits per signal transition, since 7 x 8000 = 56,000. Modems typically select the most efficient baud rate automatically (sometimes called the "autobaud" setting). However, many dial-up modems allow you to override the default baud setting and manually enter the baud rate using a software interface. This can be useful if a dial-up ISP requires a specific baud rate for communication. However, reducing the baud rate of a modem may also reduce the maximum data transmission rate. Baud rates above 8000 are not very reliable on analog telephone lines, which is why most modems do not offer higher baud settings. Additionally, at such a high baud rate, only seven bits can be sent consistently with each pulse, which is why dial-up modem speeds maxed out at 56 Kbps many years ago. Fortunately, newer technologies such as DSL and cable modems offer much faster data transfer rates. Since DSL and cable modems communicate over digital lines, baud rate is irrelevant to these devices. Updated March 29, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/baud Copy
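The relationship described above is simply bit rate = baud rate × bits per signal transition, as this short Python snippet shows using the figures from the examples.

    # Bit rate = baud rate x bits sent per signal transition
    def bit_rate(baud, bits_per_transition):
        return baud * bits_per_transition

    print(bit_rate(3200, 9))   # 28800 bps (28.8 Kbps modem)
    print(bit_rate(8000, 7))   # 56000 bps (56K modem)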

Bcc

Bcc Stands for "Blind Carbon Copy." Bcc is an email field that allows you to "blind" copy one or more recipients. It is similar to the Cc field, but email addresses listed in the Bcc field are hidden from all recipients. When you Bcc someone, that person receives a single copy of the email. He or she will not receive any replies to the message. Therefore, blind carbon copying is a useful way to share an email with one or more users who don't need to follow the email thread. It is good netiquette to use the Bcc field when sending an email to a large group of people. Hiding the recipient list protects users' privacy and prevents the email addresses from being shared unintentionally. However, if recipients should know who else received the message, it is best to use the Cc field. To vs. Cc vs. Bcc Generally, you should use the To field for the primary recipient(s) — the people to whom the message is addressed. Secondary recipients who should follow the email thread belong in the Cc field. If you want to send a copy of the email to others for reference, the Bcc field makes sense. NOTE: Bcc'd recipients can still see the email addresses listed in the To and Cc fields. Updated February 4, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bcc Copy

Benchmark

Benchmark A benchmark is a series of tests that measure a computer's performance relative to other computers. A benchmark test runs a consistent set of tasks every time and presents the results as a numerical score, helping IT workers and power users compare the performance of one computer to another. Benchmark suites can test a computer's overall performance or focus specifically on hardware components like the CPU, GPU, and storage disks. Benchmarks help computer users evaluate hardware through actual performance instead of comparing technical specifications. For example, instead of comparing two CPUs by looking at clock speed and core count, you can compare benchmark scores to see how well each performs at actual processing tasks. One popular benchmark, Geekbench, automates more than a dozen tasks in its CPU test, including file compression, HTML rendering, and image processing algorithms. It combines the results of each test into a final numeric score that reflects a CPU's overall performance. You can compare the total scores for different CPUs to get an idea of relative performance, with higher scores indicating a faster processor. Other popular benchmarking suites include PCMark and Novabench. Geekbench for macOS offers CPU and GPU benchmarking tests Benchmarking applications come in two varieties: real-world benchmarks and synthetic benchmarks. A real-world benchmark automates a series of tasks in actual applications, simulating how the computer performs in everyday use. A synthetic benchmark instead runs specially written calculations and functions that test the limits of a computer's performance, but may not reflect how most people use their devices. While real-world tests can help provide insight into how a specific system configuration meets a person's needs, synthetic benchmarks are more helpful for comparing computer hardware side-by-side. Updated August 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/benchmark Copy
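A real benchmark suite is far more sophisticated, but the Python sketch below shows the basic idea: run the same fixed set of tasks every time, measure how long they take, and combine the results into a single score. The tasks and the scoring formula here are invented for illustration.

    # Toy benchmark: time a fixed set of tasks and combine them into one score.
    import hashlib
    import time
    import zlib

    def hash_task():
        data = b"x" * 1_000_000
        for _ in range(100):
            hashlib.sha256(data).digest()

    def compress_task():
        data = b"benchmark" * 100_000
        for _ in range(50):
            zlib.compress(data)

    total_seconds = 0.0
    for task in (hash_task, compress_task):
        start = time.perf_counter()
        task()
        total_seconds += time.perf_counter() - start

    print(f"Score: {1000 / total_seconds:.0f}")   # a higher score means a faster machine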

Bespoke

Bespoke The term "bespoke" comes from England, where it originally referred to custom or tailor-made clothing. In recent years, however, the term has been applied to information technology (IT), and refers to custom services or products. For example, bespoke software is software customized for a specific purpose. Bespoke programs may include custom accounting software for a certain company or a network monitoring tool for a specific network. Because bespoke software is custom-made for a specific purpose, bespoke programs are also considered vertical market software. Another area where bespoke is used in the computer industry is in reference to websites. A bespoke website is one that is custom-built, often from scratch, to suit the needs of a business or organization. This may include a custom layout, custom database integration, and other extra features the client may require. Because bespoke websites must be individually tailored to a client's needs, they often take longer to develop and are more expensive than websites built from templates. Finally, bespoke can also be used to refer to hardware. Computer companies, such as Dell, HP, and Apple, may provide customers with custom options for the systems they buy. For example, one person may choose to build his system with a high-end graphics card for video production, while another person may choose a basic graphics card, but may add additional RAM so her computer will be able to run several programs at once. These custom configurations are sometimes referred to as bespoke systems. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bespoke Copy

Beta Software

Beta Software Beta software refers to computer software that is undergoing testing and has not yet been officially released. The beta phase follows the alpha phase, but precedes the final version. Some beta software is only made available to a select number of users, while other beta programs are released to the general public. Software developers release beta versions of software in order to garner useful feedback before releasing the final version of a program. They often provide web forums that allow beta testers to post their feedback and discuss their experience using the software. Some beta software programs even have a built-in feedback feature that allows users to submit feature requests or bugs directly to the developer. In most cases, a software developer will release multiple "beta" versions of a program during the beta phase. Each version includes updates and bug fixes that have been made in response to user feedback. The beta phase may last anywhere from a few weeks for a small program to several months for a large program. Each beta version is typically labeled with the final version number followed by a beta version identifier. For example, the fifth beta release of the second version of a software program may have the version number "2.0b5." If a developer prefers not to list the specific version of a beta program, the version number may simply have the term "(beta)" after the program name, e.g. "My New App (beta)." This naming convention is commonly used for beta versions of websites or web applications. Since beta software is a pre-release version of the final application, it may be unstable or lack features that will be included in the final release. Therefore, beta software often comes with a disclaimer that testers should use the software at their own risk. If you choose to beta test a program, be aware that it may not function as expected. Updated April 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/beta_software Copy

Bezel

Bezel The term "bezel" comes from the jewelry industry, where a bezel is a groove that holds a gemstone or watch crystal in place. The term is also used to describe the rim around gauges, such as the speedometer in a car. In the computer industry, a bezel may refer to either the edge around a monitor or the front of a desktop computer case. A monitor bezel, or screen bezel, is the area of a display that surrounds the screen. For example, if a monitor has a one inch bezel, the screen is surrounded by one inch of plastic or metal. If a monitor's bezel width is different on the sides than the top and bottom of the screen, the bezel specification will include individual measurements for each side. As displays have evolved, bezel widths have generally gotten smaller. For example, old CRT monitors often had bezel widths of two inches or more, while modern LCD displays often have bezels that are less than one inch thick. Thinner bezels help maximize the screen real estate of a laptop and make multiple desktop displays look more like a single screen when placed side by side. A computer bezel is the front face of a system unit or "tower." Most PC bezels include openings for one or more drive bays. These slots allow you to add devices such as an optical drive or an additional internal hard drive. When extra drives are not installed, these bays are usually covered by plates that are the same color as the bezel, but are not technically part of the bezel. Updated March 28, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bezel Copy

Bézier Curve

Bézier Curve A Bézier (pronounced "bez-E-A") curve is a line or "path" used to create vector graphics. It consists of two or more control points, which define the size and shape of the line. The first and last points mark the beginning and end of the path, while the intermediate points define the path's curvature. Bézier curves are used to create smooth curved lines, which are common in vector graphics. Since they are defined by control points, Bézier curves can be resized without losing their smooth appearance. Raster graphics, on the other hand, define each pixel within an image and appear blocky or pixelated when enlarged. There are several types of Bézier curves, including linear, quadratic, and higher-order curves. A linear curve is a straight line defined by two points. A quadratic curve adds an intermediate control point that pulls the path toward it, bending the line into a curve. A higher-order curve may include additional intermediate points that fine-tune how the path follows each control point. The shape of a Bézier curve is calculated using interpolation, a method of approximating the path of the line between each control point. Since computer screens display graphics using pixels, Bézier curves are always approximated when displayed on a screen. If you zoom in on a Bézier curve using a drawing program or CAD software, you will see a more accurate representation of the path. NOTE: Bézier curves are named after French engineer Pierre Bézier, which is why the "B" is always capitalized. Updated April 3, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bezier_curve Copy
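For illustration, the Python function below evaluates a quadratic Bézier curve by interpolating between three control points; sampling it at many values of t traces an approximation of the smooth path.

    # Evaluate a quadratic Bezier curve defined by control points p0, p1, p2.
    # B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2, for t between 0 and 1.
    def quadratic_bezier(p0, p1, p2, t):
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        return (x, y)

    # Sample the path from the start point (0, 0) to the end point (100, 0),
    # pulled upward by the intermediate control point at (50, 80).
    points = [quadratic_bezier((0, 0), (50, 80), (100, 0), i / 10) for i in range(11)]
    print(points)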

BGP

BGP Stands for "Border Gateway Protocol." BGP is a protocol used for routing data transmissions over the Internet. It helps determine the most efficient path, whether sending data down the street or across the globe. BGP manages the flow of data similar to how the post office handles the delivery of physical mail. For example, when a user in Los Angeles accesses a website in New York, BGP first routes the data from the New York-based server to the Internet backbone. The data travels through large fiber optic cables across state lines, then through smaller networks until it arrives at the user's ISP in Los Angeles. The ISP then routes the data locally to the user's device. Each router along the data transmission route is called a "hop." You can view the hops between your location and a server using a traceroute command, such as the ones below: Unix - traceroute techterms.com DOS - tracert techterms.com Fundamental to the Border Gateway Protocol are routing tables, which list the networks a router can reach and the neighboring routers, identified by IP address, that lead to them. BGP routers use these tables to determine the best path for each data transmission. Routing tables are frequently updated since the fastest route can change based on network traffic and outages. High-traffic routers may request updates once a minute or even every few seconds. Benefits of BGP A standard means of communication between routers Efficient routing of data over the Internet Adaptability for network congestion Redundancy when network outages occur Scalability to add or remove routers Autonomous operation NOTE: While BGP enables efficient routing of data across long distances, CDNs provide an even greater benefit by hosting data at "edge nodes" around the world. Updated March 18, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bgp Copy
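A real BGP router holds its routes in memory and applies many routing policies, but the Python sketch below captures the core idea of a routing table lookup: choose the most specific (longest-prefix) route that matches the destination address. The table entries and next-hop addresses are made-up examples.

    # Simplified routing table lookup: pick the longest matching prefix.
    import ipaddress

    # Hypothetical routing table: network prefix -> next-hop router
    routing_table = {
        "0.0.0.0/0": "192.0.2.1",            # default route
        "203.0.113.0/24": "192.0.2.7",
        "203.0.113.128/25": "192.0.2.9",
    }

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        matches = [
            (ipaddress.ip_network(prefix), hop)
            for prefix, hop in routing_table.items()
            if dest in ipaddress.ip_network(prefix)
        ]
        # The most specific route (longest prefix) wins
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("203.0.113.200"))   # 192.0.2.9 -- the /25 route is most specific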

Big Data

Big Data The phrase "big data" is often used in enterprise settings to describe large amounts of data. It does not refer to a specific amount of data, but rather describes a dataset that cannot be stored or processed using traditional database software. Examples of big data include the Google search index, the database of Facebook user profiles, and Amazon.com's product list. These collections of data (or "datasets") are so large that the data cannot be stored in a typical database, or even a single computer. Instead, the data must be stored and processed using a highly scalable database management system. Big data is often distributed across multiple storage devices, sometimes in several different locations. Many traditional database management systems have limits to how much data they can store. For example, an Access 2010 database can only contain two gigabytes of data, which makes it infeasible to store several petabytes or exabytes of data. Even if a DBMS can store large amounts of data, it may operate inefficiently if too many tables or records are created, which can lead to slow performance. Big data solutions solve these problems by providing highly responsive and scalable storage systems. There are several different types of big data software solutions, including data storage platforms and data analytics programs. Some of the most common big data software products include Apache Hadoop, IBM's Big Data Platform, Oracle NoSQL Database, Microsoft HDInsight, and EMC Pivotal One. Updated August 27, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/big_data Copy

Big Sur

Big Sur Big Sur (macOS 11) is the 17th version of Apple's macOS (formerly Mac OS X) operating system. It followed macOS Catalina and was released on November 12, 2020. macOS 11 Big Sur includes major user interface updates. The overall appearance of the operating system is lighter, more colorful, and more spacious. The application and Finder windows have a full-height sidebar, the Dock has a new "floating" look, and the menu bar is more translucent. Most app icons have been completely redesigned. Big Sur is the first version of macOS to include Control Center, which consolidates common settings, such as Wi-Fi, Bluetooth, display, and sound options into a single area. It also includes an updated version of Notification Center, which supports interactive notifications, grouping notifications, and custom widgets, such as Calendar, Weather, and Notes. Big Sur maintains most of the utilities bundled with previous versions of macOS, but removes the Network Utility app. Apple made significant updates to several bundled applications in Big Sur, including Safari, Mail, Messages, Music, and Maps. Many updates are designed to provide a more seamless user experience between iOS and macOS. For example, Messages in Big Sur supports pinned conversations, "mentions" of individual users, inline replies, animated GIFs, and message effects. NOTE: Unlike the previous 16 versions of OS X or macOS, which carried the "Mac OS 10" name, Big Sur is "macOS 11." In Big Sur, minor point updates now use the standard "minor point number" in the version, such as 11.1, 11.2, etc. Previous versions of macOS used the "patch number" for minor point updates, such as OS X Mavericks 10.9.1, 10.9.2, etc. Updated May 21, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/big_sur Copy

Binary

Binary Binary (or base-2) is a numeric system that only uses two digits — 0 and 1. Computers operate in binary, meaning they store data and perform calculations using only zeros and ones. A single binary digit can only represent True (1) or False (0) in boolean logic. However, multiple binary digits can be used to represent large numbers and perform complex functions. In fact, any integer can be represented in binary. Below is a list of several decimal (or "base-10") numbers represented in binary.

Decimal | Binary | Base-2 Calculation
0 | 0 | n/a
1 | 1 | 2^0
2 | 10 | 2^1
3 | 11 | 2^1 + 2^0
4 | 100 | 2^2
5 | 101 | 2^2 + 2^0
6 | 110 | 2^2 + 2^1
7 | 111 | 2^2 + 2^1 + 2^0
8 | 1000 | 2^3
9 | 1001 | 2^3 + 2^0
10 | 1010 | 2^3 + 2^1
64 | 1000000 | 2^6
256 | 100000000 | 2^8
1024 | 10000000000 | 2^10

One bit contains a single binary value — either a 0 or a 1. A byte contains eight bits, which means it can have 256 (2^8) different values. These values may be used to represent different characters in a text document, the RGB values of a pixel within an image file, or many other types of data. Large files may contain several thousand bytes (or several megabytes) of binary data. A large application may take up thousands of megabytes of data. No matter how big a file or program is, at its most basic level, it is simply a collection of binary digits that can be read by a computer processor. NOTE: The term "binary" may also be used to describe a compiled software program. Once a program has been compiled, it contains binary data called "machine code" that can be executed by a computer's CPU. In this case, "binary" is used in contrast to the text-based source code files that were used to build the application. Updated March 22, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/binary Copy
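Python's built-in functions make it easy to experiment with the conversions shown in the table above.

    # Convert between decimal and binary using Python's built-in functions.
    print(bin(10))         # '0b1010'  -> 10 is 1010 in binary
    print(bin(256))        # '0b100000000'
    print(int("1010", 2))  # 10        -> parse a binary string back to decimal
    print(2 ** 8)          # 256 possible values in a byte (eight binary digits)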

Bing

Bing Bing is a search engine developed by Microsoft. It provides a standard web search, as well as specialized searches for images, videos, shopping, news, maps, and other categories. Bing originated from MSN Search, which later became Windows Live Search (or simply "Live Search"). Microsoft renamed the search engine to "Bing" in June of 2009. The name change also reflected Microsoft's new direction with the search engine, which the company branded as a "decision engine." Bing is designed to help users make important decisions faster. This focus is summarized in the search engine's slogan, "Bing and decide." Bing differentiates itself from other search engines like Google and Yahoo in several different ways. For instance, Bing's home page includes a background image or video that is updated every day. You can click on different areas of the background to learn more about the daily topic. Bing also uses Microsoft's proprietary search algorithm to provide relevant search results. These results may include "instant answers," which provide helpful information at the top of the search results page for specific types of queries. The search results may also include images, videos, shopping information, or news stories that are relevant to the keywords you entered. You can choose to sign into Bing using your Windows Live ID or your Facebook login. When you are logged into Facebook, Bing displays which of your friends "Like" specific webpages in the search results. If you want even more Facebook integration, you can download the "Bing Bar," which provides News Feed updates directly from the toolbar in Internet Explorer. The Bing Bar also provides one-click access to weather, maps, videos, stock quotes, and other information. Try Bing for yourself. Updated February 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bing Copy

Biometrics

Biometrics Biometrics is a category of technologies that can detect distinctive human physical characteristics to identify a specific person. Biometric sensors can be used for authentication instead of, or in addition to, other methods like a username and password. The most common biometric authentication tools are fingerprint sensors and facial recognition camera arrays. Modern smartphones, tablets, and laptop computers often include at least one type of biometric authentication. These sensors allow you to use a biometric input to unlock the device quickly and skip entering a password or passcode. For example, FaceID lets you unlock your iPhone with a quick face scan, while some Android smartphones integrate a fingerprint sensor into the display to unlock with a touch. Other forms of identification that are less common include retina/iris scanning and voice recognition, and future biometric sensors may use other factors like ear shape, body odor, or the blood veins in a person's palms. An icon that appears on an iPhone after a successful FaceID scan Multi-factor authentication systems use biometric data as one possible factor for increased security. For example, requiring a facial scan in addition to a username and password is more secure than requiring only one method or the other because it relies on both inherent physical factors and specific knowledge. Passkeys can also use biometric authentication to log you into websites without entering a password at all, instead relying on the device having previously authenticated its user. There are some significant privacy concerns regarding the use of biometric data. First, some forms of biometric authentication can unlock a device without a person's consent or knowledge — for example, unlocking an unconscious person's smartphone by placing their finger on a fingerprint sensor. Some early facial recognition algorithms used a standard webcam for authentication and could be tricked with a photograph. Finally, passwords can be changed when necessary, while biometric information like a face or fingerprint cannot. Updated June 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/biometrics Copy

BIOS

BIOS Stands for "Basic Input/Output System." A computer's BIOS is a firmware program that controls a computer's boot sequence. It is stored on a ROM chip on the computer's motherboard and provides a user interface for several advanced hardware configuration settings. The BIOS is loaded automatically when a computer is first powered on. It runs the computer's boot sequence by scanning for installed hardware components, running the POST checks, locating the designated boot disk, then loading the operating system into memory. After the operating system loads, the BIOS serves as an intermediary between the CPU and I/O devices, keeping track of their hardware addresses for the operating system. A typical BIOS menu screen You can access the BIOS interface during startup by pressing a designated keyboard key — generally DELETE or F2 — during the POST check or while the manufacturer's logo appears on the screen. Once loaded, the BIOS interface shows system information like the CPU type and clock speed, total system RAM, and installed storage devices. It also provides several menus of customizable settings, allowing you to choose which drive serves as the boot disk, change system power settings, and toggle hardware components like onboard audio and network connections on or off. NOTE: BIOS firmware was standard on PCs throughout the 1980s, 1990s, and 2000s until manufacturers started replacing it with a successor, the Unified Extensible Firmware Interface (UEFI), in the 2010s. Updated April 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bios Copy

Biotech

Biotech Biotech, or "biotechnology," is technology based on biology, the study of living organisms. It covers all technologies related to biological systems, including humans, animals, and the environment. The primary goal of biotechnology is to improve the health of people and the planet. Examples include: treating and curing diseases formulating optimal nutrition plans reducing waste through recycling creating renewable energy sources generating higher crop yields While not all scientific research requires computer technology, most modern biotech processes involve computers. For example, scientists often use electronic instruments to perform measurements, which are sent (via a cable or wirelessly) to one or more computer systems. Researchers use computers to record data, compare results over time, and run simulations. Advancements in processor technology provide corresponding improvements in biotech research. For instance, complex operations like DNA sequencing and chemical compound modeling no longer require specialized computer systems or computing clusters. Simulations that may have taken weeks to run on older systems may now complete in a few minutes. Biotech vs Medtech Medtech (medical technology) is a subset of biotech. While biotech covers all facets of biology, medtech is specific to the medical field. Medical technologies such as vaccines and digital x-rays also fall within the biotech umbrella. Biotechnologies, such as plastic recycling and ethanol enrichment, are not considered medtech. Updated August 7, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/biotech Copy

Bit

Bit A bit (short for "binary digit") is the smallest unit of measurement used to quantify computer data. It contains a single binary value of 0 or 1. While a single bit can define a boolean value of True (1) or False (0), an individual bit has little other use. Therefore, in computer storage, bits are often grouped together in 8-bit clusters called bytes. Since a byte contains eight bits that each have two possible values, a single byte may have 2^8 or 256 different values. The terms "bits" and "bytes" are often confused and are even used interchangeably since they sound similar and are both abbreviated with the letter "B." However, when written correctly, bits are abbreviated with a lowercase "b," while bytes are abbreviated with a capital "B." It is important not to confuse these two terms, since any measurement in bytes contains eight times as many bits. For example, a small text file that is 4 KB in size contains 4,000 bytes, or 32,000 bits. Generally, files, storage devices, and storage capacity are measured in bytes, while data transfer rates are measured in bits. For instance, an SSD may have a storage capacity of 240 GB, while a download may transfer at 10 Mbps. Additionally, bits are also used to describe processor architecture, such as a 32-bit or 64-bit processor. Updated April 20, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bit Copy
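The eight-to-one relationship between bytes and bits is easy to check in Python, using the 4 KB text file from the example above.

    # A byte holds eight bits, so any size in bytes is eight times larger in bits.
    kilobytes = 4
    bytes_total = kilobytes * 1000    # 4,000 bytes
    bits_total = bytes_total * 8      # 32,000 bits
    print(bytes_total, bits_total)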

Bitcoin

Bitcoin Bitcoin is a digital currency that was introduced in 2009. There is no physical version of the currency, so all Bitcoin transactions take place over the Internet. Unlike traditional currencies, Bitcoin is decentralized, meaning it is not controlled by a single bank or government. Instead, Bitcoin uses a peer-to-peer (P2P) payment network made up of users with Bitcoin accounts. Bitcoins can be acquired using two different methods: 1) exchanging other currencies for bitcoins, and 2) bitcoin mining. The first method is by far the most common and can be done using a Bitcoin exchange like Mt. Gox or CampBX. These exchanges allow users to trade dollars, euros, or other currencies for bitcoins. The other method, bitcoin mining, involves setting up a computer system to solve math problems generated by the Bitcoin network. As a bitcoin miner solves these complex problems, bitcoins are credited to the miner. While this seems like an easy way to earn bitcoins, the Bitcoin network is designed to generate increasingly difficult math problems, which ensures new bitcoins will be generated at a consistent rate. Additionally, the Bitcoin protocol and software are open source to make sure the network isn't controlled by a single person or entity. When you obtain bitcoins, your balance is stored in a secure “wallet” that is encrypted using password protection. When you perform a bitcoin transaction, the ownership of the bitcoins is updated in the network and the balance in your wallet is updated accordingly. Bitcoin transactions are verified by the bitcoin mining systems connected to the network, so there is no need for a central bank to authorize transactions. Since bitcoins are transferred directly from person to person, the transaction fees are small (around $0.01 per transaction). Additionally, there are no prerequisites for creating a Bitcoin account and no transaction limits. Bitcoins can be used around the world, but the currency is only good for purchasing items from vendors that accept Bitcoin. Updated December 20, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bitcoin Copy

Bitmap

Bitmap A bitmap (or raster graphic) is a digital image composed of a matrix of dots. When viewed at 100%, each dot corresponds to an individual pixel on a display. In a standard bitmap image, each dot can be assigned a different color. Together, these dots can be used to represent any type of rectangular picture. There are several different bitmap file formats. The standard, uncompressed bitmap format is also known as the "BMP" format or the device independent bitmap (DIB) format. It includes a header, which defines the size of the image and the number of colors the image may contain, and a list of pixels with their corresponding colors. This simple, universal image format can be recognized on nearly all platforms, but is not very efficient, especially for large images. Other bitmap image formats, such as JPEG, GIF, and PNG, incorporate compression algorithms to reduce file size. Each format uses a different type of compression, but they all represent an image as a grid of pixels. Compressed bitmaps are significantly smaller than uncompressed BMP files and can be downloaded more quickly. Therefore, most images you see on the web are compressed bitmaps. If you zoom into a bitmap image, regardless of the file format, it will look blocky because each dot will take up more than one pixel. Therefore, bitmap images will appear blurry if they are enlarged. Vector graphics, on the other hand, are composed of paths instead of dots, and can be scaled without reducing the quality of the image. File extensions: .BMP, .DIB Updated February 6, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bitmap Copy

Bitrate

Bitrate Bitrate is the measurement of the speed at which data is transferred or processed. It may refer to the transfer speed of a network or data connection, or to the data required per second to store and play an encoded media file. In both cases, bitrate is measured in bits per second (bps), kilobits per second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps). Data Transfer A computer network or data connection's bitrate measures the speed at which data moves from one location to another. A connection's maximum possible bitrate is known as its bandwidth. For example, a cable modem connection may let you download a file at 250 Mbps; when that finishes, you could transfer that file to an external disk over a USB connection at 600 Mbps. Data transfer rate measurements use bits per second, not bytes per second. Eight bits make up one byte, so 1 megabyte/second is 8 megabits/second. Media Encoding An audio or video file's bitrate measures how much data a file needs per second of playback. The bitrate (along with the codec used to encode it) is a standard benchmark that measures a file's quality. A higher bitrate requires less media compression, resulting in fewer compression artifacts and better dynamic range. For example, an MP3 file encoded at a bitrate of 256 Kbps will sound clearer than one encoded at 96 Kbps, since it includes more data per sample. A high-definition MP4 video encoded at 18 Mbps will look better, with fewer artifacts and more accurate colors, than one encoded at 6 Mbps. A high-definition video and its bitrate A file's bitrate is particularly important for streaming media since it determines how much network bandwidth is necessary for uninterrupted playback. Streaming media services often store several versions of a file, encoded at different bitrates. This allows them to serve higher-bitrate files to devices with faster connections and send files compressed at lower bitrates when less bandwidth is available. Updated April 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bitrate Copy
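As a rough sketch of the arithmetic involved, the following Python snippet (a hypothetical helper, not from any particular library) estimates the size of a media file from its bitrate and duration:

```python
def media_size_mb(bitrate_kbps, seconds):
    """Estimate the size of a file encoded at a constant bitrate, in megabytes."""
    bits = bitrate_kbps * 1000 * seconds   # total bits of audio or video data
    return bits / 8 / 1_000_000            # bits -> bytes -> megabytes

# A four-minute song encoded at 256 Kbps versus 96 Kbps:
print(round(media_size_mb(256, 240), 1))   # 7.7 (MB)
print(round(media_size_mb(96, 240), 1))    # 2.9 (MB)
```

The same relationship works in reverse: dividing a file's size in bits by its duration in seconds gives its average bitrate.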

BitTorrent

BitTorrent BitTorrent is an open-source peer-to-peer (P2P) file-sharing protocol. It decentralizes the distribution of files by making each client that downloads a file also act as a server other clients can download from. It even allows the original file host to drop offline without affecting the availability of that file as long as at least one other computer has a full copy of it. BitTorrent helps distribute files without the use of a dedicated file server. Instead, people can create and download torrent files containing information and metadata about shared files. A torrent file also includes an address for a tracker — a server that coordinates file transfers by tracking the IP addresses for every computer downloading and sharing that torrent. Once connected to the torrent's swarm, the client can begin downloading from seeds and sharing with peers. BitTorrent allows fast file downloads amongst a swarm of computers while reducing the load on a single server Traditional file servers transfer files to clients progressively, starting at the beginning of a file and sending its contents in order. BitTorrent instead splits files into small, uniformly-sized pieces and transfers them out of sequential order, prioritizing the rarest pieces that the fewest peers have. Once the client finishes downloading each piece, it begins uploading it to other peers and moves on to another one. This way, a client can add redundancy to the network and reduce the overall demand on the seeds. The BitTorrent protocol has several benefits over traditional file servers, making it ideal for specific uses. Many free and open-source software communities use BitTorrent to distribute their software for free to reduce server costs. It also helps quickly distribute large files that see immediate high demand without overloading a single file server. For example, Blizzard used a custom BitTorrent client to release game updates for World of Warcraft, allowing gamers to get the update quickly without the risk of a file server crash. However, the most popular use of BitTorrent is the distribution of pirated movies, TV shows, music, and software. Updated June 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bittorrent Copy
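The "rarest first" piece-selection idea mentioned above can be sketched in a few lines of Python. This is only an illustration of the strategy, not an implementation of the BitTorrent protocol, and the function and variable names are invented for the example:

```python
from collections import Counter

def rarest_first(needed_pieces, peer_piece_sets):
    """Choose the needed piece held by the fewest peers."""
    availability = Counter()
    for pieces in peer_piece_sets:
        availability.update(pieces)
    candidates = [p for p in needed_pieces if availability[p] > 0]
    return min(candidates, key=lambda p: availability[p]) if candidates else None

peers = [{0, 1, 2}, {1, 2, 3}, {2, 3}]    # pieces each peer currently has
print(rarest_first({0, 1, 2, 3}, peers))  # 0 -- only one peer has piece 0, so fetch it first
```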

Black Box Testing

Black Box Testing Black box testing is a type of software testing in which the application design and source code are not known to the tester. It allows developers to receive external feedback, such as feature requests and bugs that may have been overlooked by the development team. While white box testing is useful for stress-testing apps, black box testing is useful for improving the user experience. For this reason, a software company may provide a beta version to a group of outside beta testers. These users, who may be using the app for the first time, can provide objective feedback about the user interface and overall functionality. Black box testers may also uncover bugs that the original programmers did not find. A large number of beta testers can discover errors and incompatibilities more quickly than a small group of developers. Therefore, large software companies often release beta versions of their apps before releasing them. Testers can submit bug reports and other feedback during the beta stage of development, which may last a few months. Many software programs go through both white box and black box testing. White box testing is more common in the alpha stage, while black box testing is usually performed in the beta stage. Running both types of tests at the same time is called "grey box testing." Updated September 4, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/black_box_testing Copy

Blacklist

Blacklist A blacklist is a list of items, such as usernames or IP addresses, that are denied access to a certain system or protocol. When a blacklist is used for access control, all entities are allowed access, except those listed in the blacklist. The opposite of a blacklist is a whitelist, which denies access to all items, except those included in the list. Blacklists have several applications in computing: Web servers often include a blacklist that denies access from specific IP addresses or ranges of IPs, for security purposes. Firewalls may use a blacklist to deny access to individual users, systems located in certain regions, or computers with IPs within a certain subnet mask. Spam filters often include blacklists that reject certain e-mail addresses and specific message content. Programmers may implement blacklists within programs to prevent certain objects from being modified. Since blacklists deny access to specific entities, they are best used when a limited number of items need to be denied access. When most entities need to be denied access, a whitelist approach is more efficient. Updated September 28, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/blacklist Copy
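The difference between the two access-control models can be shown with a short Python sketch (the addresses and function names are hypothetical examples):

```python
BLACKLIST = {"203.0.113.7", "198.51.100.23"}   # deny only these addresses
WHITELIST = {"192.0.2.10"}                     # allow only this address

def allowed_by_blacklist(ip):
    """Blacklist model: every address is allowed except those listed."""
    return ip not in BLACKLIST

def allowed_by_whitelist(ip):
    """Whitelist model: every address is denied except those listed."""
    return ip in WHITELIST

print(allowed_by_blacklist("192.0.2.10"))   # True  -- not on the blacklist
print(allowed_by_whitelist("203.0.113.7"))  # False -- not on the whitelist
```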

BLE

BLE Stands for "Bluetooth Low Energy." BLE (also "Bluetooth LE") is a variation of the Bluetooth wireless standard designed for low power consumption. It was introduced by the Bluetooth Special Interest Group (Bluetooth SIG) in December 2009 as part of the Bluetooth 4.0 specification. Some Bluetooth applications, such as audio streaming and data transfers, require a strong, consistent signal and large bandwidth. Other applications do not need a strong signal and require less electrical power. BLE was developed for these types of applications. Examples include: wearable devices, such as fitness trackers smart home appliances proximity sensors A proximity sensor, such as a Google Beacon or Apple iBeacon, only needs to transmit a small signal every second or so to detect nearby Bluetooth devices. This requires significantly less power than streaming high-fidelity stereo audio to a Bluetooth speaker or headset. Developers can use BLE-specific communication in their apps to significantly reduce the electrical power used by a smartphone, smartwatch, or other device. BLE Advantages The primary advantage of BLE compared to "Classic Bluetooth" is lower power consumption, which translates to longer battery life. For example, BLE can run for multiple years on a single coin-cell battery, such as a CR2032 battery used in wristwatches. Bluetooth LE can also communicate over a longer range than the 100-meter distance Classic Bluetooth supports. It also supports fast connection setup with a latency of 3 ms compared to the 100 ms latency of Classic Bluetooth. NOTE: BLE is supported by all Bluetooth 4.0 devices as long as the device software also supports it. Updated March 8, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ble Copy

Bloatware

Bloatware Bloatware is software that uses an excessive amount of system resources, such as disk space and memory. While bloatware may refer to the first version of a software program, it most often describes programs that require increasing amounts of system resources with each new version. It is generally considered good programming practice to develop efficient applications. A well-developed program should not require more RAM or disk space than necessary. However, in some cases a developer may prioritize a specific release date or new features over efficiency. This can result in a bloated application that uses far more storage space and memory than the previous version. New versions of software programs typically include new features, which provide an incentive for users to upgrade. However, with each new release, the program's system requirements may also grow. If left unchecked, after a few versions, a once efficient application can turn into bloatware. To avoid this, some software developers make an intentional effort to trim unnecessary features when adding new ones. Common examples of software that may be considered bloatware include Microsoft Windows, Apple iTunes, and Adobe Reader. Updated July 12, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bloatware Copy

BLOB

BLOB Stands for "Binary Large Object." A BLOB is a database data type that can store large chunks of binary data. Databases that support BLOBs use them to store entire distinct files within the database. Since they store binary data directly, BLOBs are unlike most other database data types that can only store data as plain text, boolean values, or numbers. BLOBs allow a database to attach binary data directly to a database record. For example, a database of a website's users may use the BLOB data type to store each user's uploaded profile image. While the most common use of BLOBs is to store media files, they may contain any file type — a BLOB might consist of a PDF document, a ZIP archive, or even an executable file. However, since they consist of unstructured binary data and not text, database software cannot search their contents; instead, a database can include additional fields containing searchable metadata for the BLOB. MySQL Workbench showing the contents of a BLOB as text, indicating that it contains a PNG image Storing files as BLOBs in a database also significantly increases the size of the database. Binary BLOBs require much more storage space than plain text and numbers, and storing large BLOBs can slow down database queries. Instead of storing BLOBs in a database directly, many administrators set aside separate cloud storage for media files and insert links to those files into database records. This makes sharing, updating, and editing files easier (since the changes do not need to go through the database) but adds complexity by requiring separate backups and permissions management. NOTE: Some database systems include multiple BLOB data types to help manage storage space. For example, MySQL supports four types — TINYBLOB for files up to 255 bytes, BLOB for files up to 64 kilobytes, MEDIUMBLOB for files up to 16 megabytes, and LONGBLOB for files up to 4 gigabytes. Updated August 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/blob Copy
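As a minimal sketch of attaching binary data to a record, the following Python example stores an image file in a BLOB column using the built-in sqlite3 module (the table, column, and file names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, avatar BLOB)")

with open("avatar.png", "rb") as f:   # any binary file works; this path is a placeholder
    image_bytes = f.read()

conn.execute("INSERT INTO users (name, avatar) VALUES (?, ?)", ("alice", image_bytes))
conn.commit()

row = conn.execute("SELECT avatar FROM users WHERE name = ?", ("alice",)).fetchone()
print(len(row[0]), "bytes stored in the BLOB column")
```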

Block-Based Coding

Block-Based Coding Block-based coding is a type of programming that uses a visual drag-and-drop interface instead of a source code editor. By connecting various blocks, novice developers can write programs without knowing a programming language. Individual blocks within a block-based coding project represent commands, such as move and turn, and clauses, such as when, while, and if then statements. A string of blocks can be attached to one or more objects, such as an image or sprite. For example, a basic program may display a cat sprite that moves to the center of the canvas and plays a "meow" sound when clicked. The program would require the following three blocks, attached to the cat object. When clicked (clause) Move to center (command) Play sound "meow" (command) More complex programs include multiple sprites that interact with each other. With enough blocks, it is possible to create detailed animations or interactive games. Block-based coding is primarily used as a learning tool for beginning programmers. It has a lower barrier to entry and provides an accessible introduction to software development. For many students, block-based coding is a simple, visual way to learn programming concepts. Updated November 5, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/block-based_coding Copy
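For comparison, the three-block cat program described above might look something like this when written as textual code. This is a hypothetical sketch; the Sprite class and event function are invented to mirror the blocks, not taken from any real block-based coding platform:

```python
class Sprite:
    """A minimal stand-in for a sprite object in a block-based coding environment."""
    def __init__(self, name):
        self.name, self.x, self.y = name, 120, 80

    def move_to(self, x, y):            # the "Move to center" command block
        self.x, self.y = x, y

    def play_sound(self, sound):        # the "Play sound" command block
        print(f"{self.name} plays '{sound}'")

def when_clicked(sprite):               # the "When clicked" clause block
    sprite.move_to(0, 0)
    sprite.play_sound("meow")

cat = Sprite("cat")
when_clicked(cat)                       # simulate clicking the cat sprite
```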

Blockchain

Blockchain A blockchain is a digital record of transactions. The name comes from its structure, in which individual records, called blocks, are linked together in a single list, called a chain. Blockchains are used for recording transactions made with cryptocurrencies, such as Bitcoin, and have many other applications. Each transaction added to a blockchain is validated by multiple computers on the Internet. These systems, which are configured to monitor specific types of blockchain transactions, form a peer-to-peer network. They work together to ensure each transaction is valid before it is added to the blockchain. This decentralized network of computers ensures a single system cannot add invalid blocks to the chain. When a new block is added to a blockchain, it is linked to the previous block using a cryptographic hash generated from the contents of the previous block. This ensures the chain is never broken and that each block is permanently recorded. It is also intentionally difficult to alter past transactions in a blockchain since all the subsequent blocks must be altered first. Blockchain Uses While blockchain is widely known for its use in cryptocurrencies such as Bitcoin, Litecoin, and Ether, the technology has several other uses. For example, it enables "smart contracts," which execute when certain conditions are met. This provides an automated escrow system for transactions between two parties. Blockchain can potentially be used to allow individuals to pay each other without a central clearing point, which is required for ACH and wire transfers. It has potential to greatly increase the efficiency of stock trading by allowing transactions to settle almost instantly instead of requiring three or more days for each transaction to clear. Blockchain technology can also be used for non-financial purposes. For example, the InterPlanetary File System (IPFS) uses blockchain to decentralize file storage by linking files together over the Internet. Some digital signature platforms now use blockchain to record signatures and verify documents have been digitally signed. Blockchain can even be used to protect intellectual property by linking the distribution of content to the original source. Updated April 13, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/blockchain Copy
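The hash linking described above can be illustrated with a short Python sketch using the standard hashlib module. This is a simplified demonstration of how altering one block breaks the chain, not a working blockchain or cryptocurrency implementation:

```python
import hashlib, json

def make_block(data, previous_hash):
    """Create a block whose hash depends on its data and the previous block's hash."""
    contents = {"data": data, "previous_hash": previous_hash}
    block_hash = hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()
    return {**contents, "hash": block_hash}

genesis = make_block("first transaction", previous_hash="0" * 64)
second = make_block("second transaction", previous_hash=genesis["hash"])

# Tamper with the first block's data, then recompute its hash.
tampered = {"data": "altered transaction", "previous_hash": genesis["previous_hash"]}
tampered_hash = hashlib.sha256(json.dumps(tampered, sort_keys=True).encode()).hexdigest()

print(tampered_hash == second["previous_hash"])  # False -- the link to the next block is broken
```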

Blog

Blog A blog (short for "web log") is a type of website that consists of regularly-published posts — articles, journal entries, links to websites and articles, and other types of text-based content. A blog is typically maintained by a single person or a small group of contributors; it may focus on a single subject or cover a wide variety of topics based on the author's experiences, opinions, and expertise. The word "blog" may also be used as a verb to mean the act of writing and posting content to a blog, and people who write blogs are known as bloggers. Blogs are one of the most common forms of website on the Internet, and it is estimated that out of nearly 2 billion total websites, 600 million are blogs. People write blogs covering a wide variety of topics and interests, like fashion, technology, food, travel, and current events; no matter how niche a topic is, there is likely someone blogging about it. Some bloggers are prolific and update their sites several times a week (or even multiple times a day), but it is also common for someone writing a blog as a side project to update it irregularly without a set schedule. Blogging platforms like WordPress provide a technical backend for bloggers to create their sites. While these platforms require some configuration, a blogger does not need to write their own content management system or page template, so they can instead focus on writing their content. These platforms also allow bloggers to engage with their audience and build a community by asking readers to leave comments and contribute to the discussion. However, comment sections require the blogger to moderate the content of comments to prevent spam and other bad behavior. Updated December 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/blog Copy

Blu-ray

Blu-ray Blu-ray is an optical media format designed to replace DVDs by providing higher storage capacity and the ability to play back high-definition video. Blu-ray discs are used for distributing HD films and television shows, as well as video games for the PlayStation 4 / 5 and Xbox One / Series X consoles. A Blu-ray disc is the same physical size as a CD or DVD (120 mm in diameter and 1.2 mm thick) but can store significantly more data — up to 100 GB on a triple-layer disc. A standard Blu-ray disc contains HD (1080p) video compressed using the H.264 codec, which is more efficient than the MPEG-2 format used by DVD video. Like DVDs, Blu-ray discs can incorporate interactive menus, multiple language options, and bonus material. An enhanced version of the Blu-ray format, Ultra HD Blu-ray, supports 4K HDR video through a combination of high-capacity dual- and triple-layer discs and the more efficient H.265 video codec. Ultra HD Blu-ray requires a compatible Blu-ray player that supports the newer codec but is physically the same format. Stack of Blu-ray discs The name "Blu-ray" comes from the blue-violet laser the Blu-ray drive uses to read data. Like other optical disc formats, Blu-ray discs contain data encoded as a series of pits and bumps in reflective material on the underside of the disc, read by a laser in the disc drive and decoded. Blu-ray discs can store data far more densely than CDs and DVDs by using a narrower laser with a shorter wavelength — 405 nm, compared to 650 nm for DVDs or 780 nm for CDs. While streaming video services can deliver HD and 4K video over the internet without discs, Blu-ray and UHD Blu-ray discs often provide better image quality. A disc's video bitrate is typically far higher than what is provided by streaming video services, which allows the video to use less compression and contain fewer artifacts. While it's often more convenient to stream a movie than it is to find and insert a disc into a Blu-ray player, film buffs prefer the superior video they get from physical media like HD and UHD Blu-ray discs. Updated December 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/blu-ray Copy

Bluetooth

Bluetooth Bluetooth is a wireless technology that allows short-range communication between devices. Bluetooth-enabled computers, tablets, and smartphones can use it to connect to a wide variety of peripherals like wireless keyboards, fitness trackers, or headphones. It operates over the 2.4 GHz frequency band and has a typical range of about 30 feet, or 10 meters. Transferring data between two Bluetooth devices requires pairing them together. Once the pairing process is complete, the two devices can transmit data as long as they're within range of each other. They will even automatically reconnect whenever they are both turned on and nearby. While devices like computers, smartphones, or tablets can connect to several Bluetooth devices at once, peripherals can have one active connection at a time — if you want to move a peripheral to another device, you should un-pair it from the first one. Otherwise, both devices will attempt to connect to the peripheral, and only one connection may be active at a time. A Microsoft Bluetooth keyboard and mouse set The Bluetooth specification has gone through several revisions since it was first launched. Newer revisions support increased range, data transfer speed, and security. Bluetooth 4.0 also introduced a separate Bluetooth Low Energy (BLE) variant that uses significantly less power. Bluetooth Peripherals Bluetooth's low power requirements make it an ideal wireless connection for many devices. Some of the most common types of Bluetooth peripherals include: Wireless audio devices like headphones, speakers, and phone headsets. Many cars include Bluetooth-enabled infotainment systems that allow you to stream audio or make hands-free phone calls. Computer peripherals like wireless mice and keyboards, as well as video game console controllers. Smart home devices and sensors. Some devices use Bluetooth as a temporary connection while configuring a new device, while other low-power devices use it as the primary connection. Smartwatches, activity trackers, and other wearable devices. These peripherals often require a Bluetooth connection to a smartphone for Internet access and data processing. Updated February 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bluetooth Copy

BMP

BMP Stands for "Bitmap." The BMP file format is an uncompressed raster graphic file format. It is suitable for saving digital photos and other high-quality digital images. It is the native file format for the Microsoft Paint application, but since it is an open format, almost all operating systems and image-editing applications support it. BMP files support multiple color depths, ranging from simple, 1-bit monochrome images to 4-bit (16 colors), 8-bit (256 colors), 16-bit (65,536 colors), and 24-bit true color (16.7 million colors) images. However, the format does not support an alpha channel, so BMP files cannot include transparent sections. It also does not support image compression, so saving an image as a BMP file requires more disk space than saving the same image as a PNG or JPEG file. However, this also means that BMP images do not suffer from compression artifacts and perfectly preserve all of the original image data. It is rare to see BMP files shared online due to their large file size compared to compressed image formats. A BMP image open in Microsoft Paint Microsoft first developed the BMP file format as a new format for images on Windows-based computers and originally introduced it as part of Windows 1.0. They designed it to serve as a standard that any image-editing application on Windows could support, making it convenient for developers to implement and for users to create. Over time, it became one of the most common uncompressed bitmap file types. File extension: .BMP Updated September 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bmp Copy
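Because the BMP header layout is fixed and well documented, its basic fields can be read with Python's standard struct module. The sketch below assumes a common Windows BMP with a BITMAPINFOHEADER, and the file name is a placeholder:

```python
import struct

def read_bmp_info(path):
    """Read the file size, dimensions, and color depth from a standard BMP header."""
    with open(path, "rb") as f:
        header = f.read(30)
    if header[:2] != b"BM":
        raise ValueError("not a BMP file")
    file_size = struct.unpack_from("<I", header, 2)[0]         # total file size in bytes
    width, height = struct.unpack_from("<ii", header, 18)      # image dimensions in pixels
    bits_per_pixel = struct.unpack_from("<H", header, 28)[0]   # color depth (1, 4, 8, 16, or 24)
    return file_size, width, height, bits_per_pixel

print(read_bmp_info("example.bmp"))   # placeholder path to any BMP file
```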

Bookmark

Bookmark A bookmark is a saved shortcut that directs your browser to a specific webpage. It stores the title, URL, and favicon of the corresponding page. Saving bookmarks allows you to easily access your favorite locations on the Web. All major web browsers allow you to create bookmarks, though each browser provides a slightly different way of managing them. For example, Chrome and Firefox display your bookmarks in an open window, while Safari displays them in a list in the sidebar of the browser window. Internet Explorer uses the name "Favorites" to refer to bookmarks, and like Safari, it displays all your favorites in a list within the browser window sidebar. To create a bookmark, simply visit the page you want to bookmark and select Add Bookmark or Bookmark this Page from the Bookmarks menu. In Internet Explorer, you can click the star icon to open the Favorites sidebar and click Add to Favorites to add the current page to your bookmarks. The website title will show up in your bookmarks list along with the website's favicon if available. As your collection of bookmarks grows, you can create folders to organize your bookmarks into different categories. It is helpful to bookmark frequently visited websites and useful references since you don't have to remember the URLs. Additionally, you can just click the bookmarks instead of typing in the full web addresses. Some browsers even display your bookmarked pages in the autocomplete drop-down menu as you type in the address bar. This allows you to visit bookmarked pages without even opening the bookmarks window or sidebar in your browser. NOTE: A bookmark only stores the location of a webpage, not the contents of the webpage itself. Therefore, when you open a previously saved bookmark, the contents of the page may have changed since the last time you viewed it. Updated February 11, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bookmark Copy

Boolean

Boolean Boolean, or boolean logic, is a subset of algebra used for creating true/false statements. Boolean expressions use the operators AND, OR, XOR, and NOT to compare values and return a true or false result. These boolean operators are described in the following four examples: x AND y - returns True if both x and y are true; returns False if either x or y is false. x OR y - returns True if either x or y, or both x and y are true; returns False only if x and y are both false. x XOR y - returns True if exactly one of x or y is true; returns False if x and y are both true or both false. NOT x - returns True if x is false (or null); returns False if x is true. Since computers operate in binary (using only zeros and ones), computer logic can often be expressed in boolean terms. For example, a true statement returns a value of 1, while a false statement returns a value of 0. Of course, most calculations require more than a simple true/false statement. Therefore, computer processors perform complex calculations by linking multiple binary (or boolean) statements together. Complex boolean expressions can be expressed as a series of logic gates. Boolean expressions are also supported by most search engines. When you enter keywords in a search engine, you can refine your search using boolean operators. For example, if you want to look up information about the Apple iMac, but want to avoid results about apples (the fruit), you might search for "Apple AND iMac NOT fruit." This would produce results about iMac computers, while avoiding results with the word "fruit." While most search engines support boolean operators, their syntax requirements may vary. For example, instead of the words AND and NOT, the operators "+" and "-" may be required. You can look up the correct syntax in the help section of each search engine's website. Updated March 12, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/boolean Copy
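The four operators described above map directly onto Python, which uses the keywords and, or, and not, plus the ^ operator for XOR on boolean values:

```python
x, y = True, False

print(x and y)   # False -- AND requires both values to be true
print(x or y)    # True  -- OR requires at least one true value
print(x ^ y)     # True  -- XOR is true when exactly one value is true
print(not x)     # False -- NOT inverts the value
```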

Boot

Boot To boot a computer is to turn it on. Booting can refer to either a hard boot from a powered-off state or a soft boot (or "reboot") that restarts it when it's already on. Computers have a boot sequence that takes place after they power on before they are usable. The first step in the boot sequence happens when you press the computer's power button. The power supply delivers electricity to the motherboard, which initiates the boot process. The computer starts the POST (Power On Self Test), which verifies the integrity of the CPU, memory, and other components. Next, the computer loads startup instructions from its ROM — either the BIOS (in older computers) or the UEFI (in modern computers). It checks its storage devices and loads the operating system from the boot disk. Once the operating system loads, the computer is ready to use. In most cases, there's nothing that you need to do during the boot process. However, you are allowed to intervene at certain times. After the POST, the computer displays a list of keys you can press to change BIOS settings and the boot disk order, although the exact keys used may vary. Updated February 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/boot Copy

Boot Disk

Boot Disk A boot disk, or startup disk, is a storage device from which a computer can "boot" or start up. The default boot disk is typically a computer's internal hard drive or SSD. This disk contains files required by the boot sequence as well as the operating system, which is loaded at the end of the startup process. While it is most common to boot a system from the primary internal storage device, most computers allow you to boot from other disks. This includes external hard drives, CDs, DVDs, and USB flash drives. Starting up from a secondary boot disk can be useful for running diagnostics and disk utilities on the primary disk. It may be necessary if the computer cannot start up from the primary drive. Some operating systems require separate partitions for boot files and operating system files. These partitions are called the "boot partition" and "system partition." There are four requirements for a storage device to function as a boot disk: The media must be supported by the hardware (for example, a DVD startup disk requires a CD/DVD drive). It must be formatted for the correct system (for example, NTFS for Windows or APFS for Mac). It must contain the boot files required for startup. It must contain an operating system that is supported by the computer. Several disk utilities allow you to create a custom boot disk, which you can save to a DVD or thumb drive. These boot disks include a rudimentary "utility" operating system that can run diagnostics and repairs on the primary drive. The goal of this type of boot disk is to start up a computer in case of a drive failure and repair the primary drive so it can be used as the startup disk again. NOTE: Some operating systems support network booting. This loads the startup files and operating system over a network connection. The data may be downloaded from a LAN or the Internet. Updated May 2, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/boot_disk Copy

Boot Sector

Boot Sector The boot sector is a dedicated section of a hard disk or other storage device that contains data used to boot a computer system. It includes the master boot record (MBR), which is accessed during the boot sequence. The boot sector is typically located at the very beginning of a disk, before the first partition. It includes the partition map, which identifies all the partitions on the disk. It also defines which partition contains the startup data (such as the operating system). This lets the computer know which partition to access when starting up. Therefore, if a disk's boot sector becomes corrupted or contains invalid data, the computer may not be able to start up from the disk. If this happens, you should run a disk utility or antivirus program to try and fix the problem. Updated July 30, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bootsector Copy

Boot Sequence

Boot Sequence A computer's boot sequence is the series of steps it takes every time it boots. The boot sequence activates the computer's hardware components and loads the operating system into memory. In older computers, the BIOS is responsible for initiating the boot sequence; in modern computers, the UEFI controls it. The boot sequence starts when you first power the computer on. Once it has power, the BIOS or UEFI instructs the CPU to execute some startup code. It performs a quick POST to check whether the computer's hardware components (including the CPU, RAM, and storage devices) are all functioning. If anything fails the POST check, then the computer displays an error message and will not boot. After the POST, you'll be shown a prompt to press a specific key on the keyboard to access the BIOS or UEFI settings (most often the Delete key) or manually select a boot disk (typically by pressing F12 or another function key). If you don't choose one of those options, the computer automatically selects a default boot disk that contains an operating system and begins loading it into memory. Once the operating system is fully loaded, it displays either the desktop or a login screen, and the computer is ready to use. Boot disk priority options in the UEFI settings of an Aorus motherboard A boot sequence may only take a few seconds, or it may take up to a few minutes; this depends on the speed of the computer's CPU, RAM, and storage devices. Booting from an SSD is faster than booting from a hard drive; booting from an external USB drive or an optical disc will be even slower. If the computer shut down unexpectedly, the boot sequence may take longer while it runs additional disk and hardware checks to see whether any component is malfunctioning. Updated October 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/boot_sequence Copy

Bootstrap

Bootstrap Bootstrap, or bootstrapping, is a verb that comes from the saying, "to pull oneself up by one's bootstraps." The idiom implies a person is self-sufficient, not requiring help from others. Similarly, in the computing world, bootstrapping describes a process that automatically loads and executes commands. The most fundamental form of bootstrapping is the startup process that takes place when you start up a computer. In fact, the term "boot," as in booting up a computer, comes from the word bootstrap. When you turn on or restart a computer, it automatically loads a sequence of commands that initializes the system, checks for hardware, and loads the operating system. This process does not require any user input and is therefore considered a bootstrap process. While bootstrapping is often associated with the system boot sequence, it can be used by individual applications as well. For example, a program may automatically run a series of commands when opened. These commands may process user settings, check for updates, and load dynamic libraries, such as DLL files. They are considered bootstrap processes because they run automatically as the program is starting up. NOTE: Bootstrap is also a popular web development framework used for creating websites. It was developed by a team at Twitter and has been an open source project since 2011. The Bootstrap framework includes CSS styles, JavaScript libraries, and HTML files. Bootstrap provides a way for developers to easily build responsive websites rather than designing them from scratch. Updated April 14, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bootstrap Copy

Bot

Bot A bot (short for "robot") is an automated program that runs over the Internet. Some bots run automatically, while others only execute commands when they receive specific input. There are many different types of bots, but some common examples include web crawlers, chat room bots, and malicious bots. Web crawlers are used by search engines to scan websites on a regular basis. These bots "crawl" websites by following the links on each page. The crawler saves the contents of each page in the search index. By using complex algorithms, search engines can display the most relevant pages discovered by web crawlers for specific search queries. Chat bots were one of the first types of automated programs to be called "bots" and became popular in the 1990s, with the rise of online chatrooms. These bots are scripts that look for certain text patterns submitted by chat room participants and respond with automated actions. For example, a chat bot might warn a user if his or her language is inappropriate. If the user does not heed the warning, the bot might kick the user from the channel and may even block the user from returning. A more advanced type of chat bot, called a "chatterbot," can respond to messages in plain English, appearing to be an actual person. Both types of chat bots are used for chatroom moderation, which eliminates the need for a person to monitor each chatroom. While most bots are used for productive purposes, some are considered malware, since they perform undesirable functions. For example, spambots capture email addresses from website contact forms, address books, and email programs, then add them to a spam mailing list. Site scrapers download entire websites, enabling unauthorized duplication of a website's contents. DoS bots send automated requests to websites, making them unresponsive. Botnets, which consist of many bots working together, may be used to gain unauthorized access to computer systems and infect computers with viruses. Updated February 14, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bot Copy
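The link-following step at the heart of a web crawler can be sketched with Python's standard library. This is a minimal illustration of how a crawler finds the links on a single page, not a complete or polite crawler (a real one would respect robots.txt, limit its request rate, and track visited pages); the URL is only an example:

```python
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href" and value)

page = urlopen("https://example.com").read().decode("utf-8", errors="replace")
collector = LinkCollector()
collector.feed(page)
print(collector.links)   # the links a crawler would queue up to visit next
```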

Botnet

Botnet A botnet is a group of computers that are controlled from a single source and run related software programs and scripts. While botnets can be used for distributed computing purposes, such as scientific processing, the term usually refers to multiple computers that have been infected with malicious software. In order to create a malicious botnet, a hacker must first compromise several computers. This might be done by exploiting a security hole through a Web browser, IRC chat program, or a computer's operating system. For example, if a user has turned off the default firewall settings, his or her computer may be susceptible to such a botnet attack. Once the hacker has gained access to several computers, he can run automated programs or "bots" on all the systems at the same time. A hacker may create a botnet for several different purposes, such as spreading viruses, sending e-mail spam, or crashing Web servers using a denial of service attack. Botnets can range from only a few computers to several thousand machines. While large botnets can cause the most damage, they are also the easiest to locate and break apart. The unusual amount of bandwidth used by large botnets may trigger an alert at one or more ISPs, which might lead to the discovery and dismantling of the botnet. In most situations, users do not know that their computers have become part of a botnet. This is because hackers typically hide their intrusion by masking the activity within regular processes, similar to a rootkit attack. Therefore, it is a good idea to install antivirus or anti-malware software that regularly checks for such intrusions on your computer. It is also wise to make sure your system firewall is turned on, which is usually the default setting. Updated June 9, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/botnet Copy

Bounce

Bounce The term "bounce" has several different IT-related meanings, yet none of them include bouncy balls. The most common definition of bounce used in the computer world refers to e-mail messages. 1. Returning E-mail When you send an e-mail message to another person, the mail server processes the message and delivers it to the appropriate user's mailbox. For example, if you send a message to "mrman@mail.com," the mail.com server looks for a user named "mrman" to deliver the message to. If the user does not exist, the mail server may bounce the message back to the sender, saying "Sorry, that user does not exist." These messages often come from "Mail Delivery Subsystem" and have a subject line that reads "Returned mail: see transcript for details." If you receive a bounced message, you may want to check the e-mail address you sent the message to and make sure it was typed correctly. If the address is correct, it may help to read the body of the bounced message for more details. The transcript may say something like "User quota over limit," which means the recipient has reached his or her e-mail quota and must delete some messages and/or attachments in order to receive new mail. If this is the case, you may want to call the person or use an alternative e-mail address to let the person know he or she has some Inbox maintenance to do. 2. Restarting a Computer The term "bounce" can also describe the process of rebooting or restarting a computer. For example, a workstation may need to be bounced after installing new software. Similarly, a Web server may be bounced if websites hosted on the server are not responding correctly. 3. Exporting Audio "Bounce" can also describe the process of exporting several tracks in an audio mix to one mono track or two stereo tracks. This helps consolidate audio tracks after they have been mixed. Bouncing audio tracks limits the need for processing power since the computer only has to process one track instead of all the tracks individually. Digital Performer is one of many audio programs that use bouncing to export audio. 4. Hiding a Network Connection Finally, "bouncing" can also be used in networking to describe a method of hiding the source of a user's network connection. This type of bouncing is often abbreviated "BNC." Someone who bounces his network connection is called a "bouncer," though this is not the same person who checks your ID at the bar. Updated October 18, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bounce Copy

bps

bps Stands for "Bits Per Second." The abbreviation "bps" is a standard way to measure data transfer rates, such as network connection and Internet download speeds. In the early days of the Internet, data transfer speeds were measured in bps. As Internet connection speeds increased, a variation of bps – Kbps (1,000 bps) – became more common. Today, Internet connection speeds are often measured in Mbps (1,000,000 bps). Some networks support speeds over 1,000 Mbps and are measured in Gbps. It is important to abbreviate bits per second as "bps" (with a lowercase "b"). The lowercase "b" signifies bits rather than bytes. Since a byte is eight bits, 100 Bps is equal to 800 bps. Variations of Bps, such as megabytes per second or gigabytes per second, are typically abbreviated with a slash, such as MB/s or GB/s. These measurements are commonly used to define transfer speeds of internal components, such as HDDs and SSDs. Updated February 20, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bps Copy
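Keeping bits and bytes straight matters most when estimating download times, since file sizes are usually given in bytes while connection speeds are given in bits. The Python sketch below shows the conversion (the function name is made up for this example):

```python
def download_time_seconds(file_size_mb, connection_mbps):
    """Estimate download time for a file measured in megabytes over a link measured in megabits."""
    file_size_bits = file_size_mb * 1_000_000 * 8     # megabytes -> bytes -> bits
    return file_size_bits / (connection_mbps * 1_000_000)

print(download_time_seconds(100, 100))   # 8.0 -- a 100 MB file takes about 8 seconds at 100 Mbps
```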

Breadcrumbs

Breadcrumbs Breadcrumbs are a user interface element designed to make navigation easy and intuitive. They are used by operating systems, software programs, and websites. Breadcrumbs display the directory path of the current folder or webpage and provide one-click access to each of the parent directories. Like breadcrumbs in the story "Hansel and Gretel," they allow you to retrace your steps back to where you started. The Windows operating system displays breadcrumbs in the toolbar of each open window. If you open the "Public" user folder, for example, the toolbar will display the current location as: Computer → Local Disk (C:) → Users → Public This provides a simple way to view the directory path of the current folder. Each of the directories listed in the toolbar is clickable, providing quick access to the parent folders. Webpages often include breadcrumbs near the top of the page, though they are usually placed outside of the main navigation bar. The purpose of website breadcrumbs is twofold: 1) to clearly identify in which section of a website a specific webpage is located, and 2) to make it easy for you to jump to the parent sections. For example, a soccer page on a news website may include breadcrumbs Home : Sports : Soccer : Premier League near the top of the page. This indicates the current page is three sections deep within the website. Since each section name is also a link, you can quickly jump to any of the parent sections, such as "Soccer," by simply clicking the link within the breadcrumbs. Updated May 21, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/breadcrumbs Copy
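A website's breadcrumbs are typically generated from the current page's location. The following Python sketch shows one simple way to turn a URL path into a breadcrumb trail; the function name and path are invented for the example:

```python
def breadcrumbs(path):
    """Turn a URL path like '/sports/soccer/premier-league' into (label, link) pairs."""
    trail = [("Home", "/")]
    parts = [p for p in path.strip("/").split("/") if p]
    for i, part in enumerate(parts):
        label = part.replace("-", " ").title()
        link = "/" + "/".join(parts[:i + 1])
        trail.append((label, link))
    return trail

for label, link in breadcrumbs("/sports/soccer/premier-league"):
    print(label, "->", link)
# Home -> /
# Sports -> /sports
# Soccer -> /sports/soccer
# Premier League -> /sports/soccer/premier-league
```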

Bricking

Bricking Bricking is when an electronic device becomes unusable, often from a failed software or firmware update. If an update error causes system-level damage, the device may not start up or function at all. In other words, the electronic device becomes a paperweight or a "brick." The more foundational an update is, the greater its potential to brick a device. For example, a firmware update is more likely to cause bricking than an operating system update. An OS update is higher risk than an application update. Bricking may be caused by an invalid update (such as an attempt to "jailbreak" a phone) or an update that does not finish successfully. To avoid incomplete updates, most firmware and OS updates will not run unless the device is plugged into a power source. Developers can take preventative measures to reduce the risk of bricking. For example, operating system updates typically wait to overwrite old system files until the new files have been fully downloaded and verified. Still, if an update fails at a crucial moment, such as when files are being overwritten, the device may become unusable. If the system cannot boot up and is not updatable, the device is bricked. Hard vs Soft Bricking There are two types of bricking — hard and soft. A hard brick is when a device shows little or no signs of life. For example, the power button may light up, but the screen will not turn on. A soft-bricked device functions enough to display an error. For instance, on startup, the screen may display an error message or icon that indicates the operating system is unable to load. A bricked device may or may not be recoverable. Soft bricks are easier to resolve than hard bricks and can often be fixed by the user. Hard bricks may require special recovery tools only available through the device manufacturer. Therefore, if your device is hard-bricked, you may want to bring it to an authorized repair shop before relegating it to a paperweight. Updated January 10, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bricking Copy

Bridge

Bridge A bridge is a device that provides a connection between two or more local area networks. It allows devices on separate networks to communicate as though they were on the same LAN. A bridge may connect two networks of the same type (e.g. bridging two Ethernet networks) or two different types (e.g. bridging a Wi-Fi network and an Ethernet network). Network bridges can take several different forms. Many bridges are dedicated hardware devices that only provide a bridge with no additional features; even though a bridge and a router may look similar, a bridge does not assign IP addresses, keep a log of network activity, or provide a firewall. Dedicated Ethernet bridges typically only have two ports to connect to and bridge two routers or switches. Most Wi-Fi bridges have a single Ethernet port and connect to a single Wi-Fi network, giving the device or network connected to the Ethernet port access to the Wi-Fi network as if they were the same network. Many Wi-Fi access points include a bridging mode feature, although you must activate it through the device's settings before it's enabled. Bridges and gateways are similar networking devices that both allow separate networks to communicate. Bridges link two LANs that use the same protocol, and operate at the data link layer (level 2 of the OSI model); gateways link networks of different types and work across multiple OSI layers, converting protocols when necessary. NOTE: A computer with multiple network adapters may also function as a bridge through software. Windows, macOS, and Unix-based operating systems all include features that allow a computer to bridge two network connections, regardless of the type of connection. Updated January 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bridge Copy

Broadband

Broadband Broadband refers to high-speed data transmission over a single cable, carrying a large amount of data across multiple signals simultaneously. It commonly refers to a high-bandwidth, always-on Internet connection. The most common types of broadband connections are DSL, cable, fiber, and mobile wireless. Different countries have different standards for what is considered broadband. It generally applies to any Internet connection that is both faster than dial-up or ISDN and is always connected. The speed at which an Internet connection is deemed broadband changes over time as technology improves. For example, the US FCC once considered a connection that offered 4 Mbps download / 1 Mbps upload to be broadband before later updating their standards to 25 Mbps down / 3 Mbps up. Types of Broadband Internet Several different types of broadband Internet connection are commonly available. Some require dedicated lines and only operate in tight geographic areas, while others are more widely available. Speeds, prices, and the quality of service can vary greatly based on the type of connection. Digital Subscriber Line, or DSL, is a type of broadband connection that operates over telephone lines. A DSL line's speed depends on several factors — the quality of the line, the number of other customers in an area, and the distance between the customer and the ISP. Typical speeds range from 1-5 Mbps on the low end up to 100 Mbps. Cable Internet operates over coaxial cables originally designed for cable television service. Like DSL, a high number of customers in an area can result in a slower speed for everyone, so many cable ISPs implement monthly data caps to limit usage. Speeds range from 30-100 Mbps on the low end to more than 1 Gbps. Fiber Internet service runs over dedicated fiber optic cables. Fiber networks are not as affected by local network traffic as DSL and cable, so most fiber connections are uncapped and offer symmetrical download and upload speeds. Fiber's availability requires the local ISP to run new lines, so it is more geographically limited than options that use the existing phone and cable lines. Fiber Internet speeds range from 100 Mbps on the low end up to 2 Gbps. Wireless Internet can run over normal cellular networks or a fixed wireless connection. Customers of 4G and 5G mobile carriers can use dedicated hotspots or their own mobile phones to connect their homes to the Internet. Fixed wireless uses lower frequencies that travel from the ISP's radio tower to a dedicated outdoor radio receiver. Connection quality depends on the type of service, the signal strength, and whether there is a clear line of sight to the tower. Speeds range from 5-10 Mbps up to 1-2 Gbps. Satellite Internet is delivered from satellites in orbit to a dish or receiver mounted outside of a home. Since a signal has to travel to orbit and back, satellite Internet has higher latency than other types of broadband. It is generally more expensive than wired connections but can operate nearly anywhere. Satellite signals also require a line of sight to the sky and can suffer from interference caused by trees and weather conditions. Speeds range from around 10 Mbps up to 500 Mbps. Updated October 31, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/broadband Copy

Broadcast

Broadcast A broadcast is a transmission of information to a large group of people. Originally, it referred to radio and television broadcasts. Today, it also describes digital broadcasts sent over the Internet. Over-The-Air Broadcasting Early broadcasts, such as AM and FM radio, were produced using analog signals transmitted over the air on different frequencies. In the early 2000s, several over-the-air or "OTA" broadcasts either converted to or added digital signals, such as HDTV, HD radio, and satellite radio. Today, many broadcasting companies transmit both analog and digital signals. While cable TV is not considered OTA broadcasting, it went through a similar analog-to-digital transformation in the early 2000s. A digital signal supports higher-resolution audio and video and can also be compressed to save bandwidth. Internet Broadcasting Internet broadcasting is the distribution of content from a single source to multiple recipients. It is similar to multicasting, but does not limit the number of users. For example, a multicast may transmit a video stream to 50 people, while a broadcast is available to all the users who tune into the stream. Most streams are considered broadcasts since they are not targeted to specific users. Similar to a radio broadcast, which only transmits one signal, an Internet broadcast only sends one copy of the data. However, Internet broadcasts are not limited by physical distance and are accessible to users around the world. When individuals connect to a stream or other Internet broadcast, the data is rerouted to multiple locations. By the time the broadcast reaches all the connected users, it may be duplicated hundreds or thousands of times. Updated September 19, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/broadcast Copy

Brouter

Brouter A brouter is a device that functions as both a bridge and a router. It can forward data between networks (serving as a bridge), but can also route data to individual systems within a network (serving as a router). The main purpose of a bridge is to connect two separate networks. It simply forwards the incoming packets from one network to the next. A router, on the other hand, is more advanced since it can route packets to specific systems connected to the router. A brouter combines these two functions by routing some incoming data to the correct systems, while forwarding other data to another network. In other words, a brouter functions as a filter that lets some data into the local network, while redirecting unrecognized data to another network. While the term "brouter" is used to describe a bridge/router device, dedicated brouters are rare. Instead, most brouters are simply routers that have been configured to also function as a bridge. This functionality can often be implemented using the router's software interface. For example, you may configure a router to only accept data from specific protocols and data sources, while forwarding other data to another network. NOTE: Since routers are more complex than bridges, it is more likely for a router than a bridge to function as a brouter. Therefore, brouters are also called bridging routers. Updated April 19, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/brouter Copy

Brownfield

Brownfield Brownfield is a construction term that describes previously developed land. In the IT industry, it refers to previously developed software. Brownfield software development builds on an existing program. It may be contrasted with "greenfield" development, which involves creating a software program from scratch. Since the software industry has been around for several decades, the vast majority of software development is brownfield. For example, each new version of Adobe Photoshop and Microsoft Word is developed as a brownfield project. Even modern video games, such as Call of Duty: Black Ops 3 and StarCraft 2, are created from earlier versions of the software.

Brownfield vs Greenfield

Brownfield software development has many advantages over greenfield projects. For instance, whenever a software company releases an update to a program, it already has a good idea of the market for the software. It also knows what features and style of interface its users expect. Adding new features and interface enhancements is less time-consuming than developing a program from scratch. Therefore, brownfield development is less costly and involves less risk than greenfield development. Updated December 18, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/brownfield Copy

Browser Engine

Browser Engine A browser engine, also known as a rendering engine or layout engine, is a central software component in a web browser. It translates HTML and CSS from plain text marked up with tags into the content you see on the screen — setting up the page layout, styling text, and placing images. The browser engine also handles navigation between pages using hyperlinks. In addition to laying out the elements of a web page, the browser engine also creates a document object model (DOM) for each page, which organizes each page into standard elements like its title, body, and headers. The browser engine is also closely integrated with the browser's JavaScript engine, which executes a web page's JavaScript code. Updates to a browser engine can add support for new file formats, fix bugs, or support new features added to the HTML and CSS specifications.

The Safari browser on desktop and mobile uses the WebKit browser engine.

Despite the wide variety of web browsers that are available, most use one of a handful of browser engines. Google Chrome uses the Blink browser engine; other web browsers based on the Chromium code base, like Microsoft Edge and the Brave browser, also use Blink. Safari on macOS, iOS, and iPadOS uses the WebKit engine, and Mozilla Firefox uses the Gecko browser engine. Other applications may include browser engines to display HTML content — for example, email clients that show HTML email use a browser engine to render messages, while apps built using a Chromium-based framework like Electron create their entire user interface using a browser engine. NOTE: In early web browsers, rendering engines and browser engines were separate components, with the browser engine giving commands to the rendering engine. Modern web browsers integrate the two components so tightly that they are now considered the same. Updated May 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/browser_engine Copy

Browser Extension

Browser Extension A browser extension is a small software add-on that adds new functionality to a web browser. It may change how a built-in feature works or add new features entirely. For example, an extension may add a way to translate selected text or change how tabs appear when a browser is used full-screen. Extensions can also integrate website features directly into the browser, like tweeting a link to the current webpage, or saving an image to a Pinterest board. Many browsers have a central repository of extensions where you can browse and install extensions, like the Chrome Web Store or the Safari Extensions section of Apple's App Store. These central locations sort extensions into several categories and provide descriptions, screenshots, and user reviews for each one. You can install extensions directly from a developer's website, but remember to pay close attention to what data the extension will access. An extension from a disreputable developer could turn out to be malware that steals passwords or secretly redirects your traffic. After installing an extension, an icon for it will usually appear on the browser's toolbar. Clicking the icon will open a menu where you can invoke the extension or customize its settings. A browser that supports extensions will also provide a way to manage them, such as enabling or disabling them at any time, or removing ones that are no longer needed. NOTE: Browser extensions are each written for a specific browser, so one designed for Safari won't be compatible with Firefox or Chrome. However, two browsers that share an underlying browser engine (like the Chromium engine that powers both Google Chrome and Microsoft Edge) can share extensions. Updated September 21, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/browser_extension Copy

Brute Force Attack

Brute Force Attack A brute force attack is an attempt to gain access to a system through successive login attempts. It can be performed manually or by using an automated script. In either case, a brute force attack tries different username and password combinations with the hope of discovering a valid login. While brute force attacks are simplistic by nature, their implementation is often complex. Since most servers will block a user or IP address after multiple failed logins, a hacker may use multiple systems to perform a single brute force attack. Some attacks may use hundreds or even thousands of devices, similar to a distributed denial of service (DDoS) attack. While the odds of guessing a correct login via a brute force attack are low, it is still one of the most common ways online accounts are compromised. Given enough attempts, it is theoretically possible to discover any login. However, short and common passwords are the most vulnerable.

How to Protect Against Brute Force Attacks

The two primary ways to protect your online accounts from brute force attacks are to 1) choose strong passwords and 2) use two-factor authentication.

1. Choose strong passwords

A fundamental step in securing any online account is to choose a strong password. This means choosing a password that:

is long – at least eight characters, preferably 12 or more.
contains special characters – including numbers and symbols, as well as lowercase and uppercase characters.
is not personally identifiable – using a special date or the name of someone close to you makes it easy for someone to manually hack your account.

It is especially important to choose a strong password for your email account since your username (half of your login) is your public email address. Additionally, if someone gains access to your email, he or she can easily discover your other passwords.

2. Use Two-Factor Authentication

Some services allow you to enable two-factor authentication, which requires authentication from two devices. For example, you may be asked to enter a username and password on your computer, followed by a code sent via text to the phone number listed in your account. With two-factor authentication, even if a hacker knows your username and password, he or she will not be able to successfully log in to your account. Updated June 17, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/brute_force_attack Copy
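To illustrate why longer passwords with larger character sets resist brute force attacks, here is a minimal Python sketch that estimates the search space and the time needed to exhaust it. The guess rate is an assumed figure for illustration only; real attack speeds vary widely.

```python
# Illustrative sketch: brute force search space vs. password length and character set.
GUESSES_PER_SECOND = 1_000_000_000  # assumed attacker speed, for illustration only

def combinations(charset_size: int, length: int) -> int:
    """Total number of possible passwords of the given length."""
    return charset_size ** length

for label, size in [("lowercase letters only", 26), ("letters, digits, and symbols", 94)]:
    for length in (8, 12):
        total = combinations(size, length)
        years = total / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)
        print(f"{label}, {length} characters: {total:.2e} combinations (~{years:,.0f} years to exhaust)")
```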

BSOD

BSOD Stands for "Blue Screen of Death." The BSOD is an error message displayed by Windows when a non-recoverable error occurs. The "blue screen" refers to the blue background color that fills the entire screen behind the error message. It is called the "blue screen of death" because it is displayed when the computer has encountered a "fatal error" and must be restarted. Technically, the BSOD is caused by a Windows STOP error. STOP errors are critical system-level errors that cause the computer to stop responding in order to prevent data corruption or damage to the hardware. Several different problems can cause a STOP error, including kernel crashes, boot loader errors, driver malfunctions, and hardware faults. When a STOP error occurs, the BSOD is displayed, along with a message that provides information about the specific error that has occurred. The good news is that the BSOD protects your data and prevents damage to your PC. The bad news is that it halts the entire system, which may cause you to lose unsaved files that you were currently working on. In some cases, the BSOD will produce a "memory dump" of the current data stored in RAM, which is saved to the primary storage device. This dump can be opened and analyzed after the computer is restarted, though it is often difficult to extract usable data from this file. Your best chance of recovering work after a BSOD is if the application you were using includes an "autosave" feature that saves a temporary version of your work at regular intervals. If your computer crashes while you are working, the program should automatically open the temporary file the next time you launch the program. NOTE: In Windows 8, the blue screen of death has a softer "aqua" blue background and often provides more readable error messages than in previous versions of Windows. However, the computer still must be restarted when the BSOD is displayed. Updated June 14, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bsod Copy

Buffer

Buffer A buffer is a temporary, intermediate storage area in a computer's memory that holds data while it transfers between two locations. These locations may be physical (for example, separate computers on a network or hardware components inside a computer) or virtual (two programs running on one computer). A buffer smooths out a data transfer when one side has a slower connection or less processing ability than the other side. It allows some data to build up in memory to provide the receiving end with a steady data flow, even if the sending end experiences fluctuations or interruptions. The most familiar example of a buffer is a streaming video that downloads for a few seconds before it starts playing. Buffering the video allows the computer playing the video to play it back without pausing, even if the connection is not fast or consistent enough to play it directly over the Internet. If the connection drops entirely, the video still plays back until it reaches the end of the buffer (or the end of the video if the computer managed to buffer the entire thing).

A streaming video buffering before playback.

Many other aspects of computers use buffers to help manage data transfers, using part of the system RAM or some dedicated memory on a hardware component. For example, hard drives include a built-in memory buffer to store data during read-and-write operations, since hard drives cannot write data as fast as the storage controller sends it. Video cards render 2D and 3D graphics to a frame buffer before sending the resulting images to the monitor to keep the video frame rate smooth. Even mouse clicks and keystrokes first go to a memory buffer for a moment before the operating system processes them as input. Updated December 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/buffer Copy
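The following Python sketch (assumed names and timings, not from this entry) mimics how a playback buffer smooths out an unreliable source: data builds up in memory first, so playback can continue even when the source briefly stalls.

```python
# Minimal sketch of a playback buffer fed by an unreliable source.
from collections import deque
import random

buffer = deque()            # temporary holding area in memory
MIN_CHUNKS_BEFORE_PLAY = 5  # let some data build up before playback starts

def receive_chunk():
    """Simulated network source that occasionally stalls and returns nothing."""
    return b"video-data" if random.random() > 0.3 else None

# "Buffering..." phase: wait until enough data has accumulated
while len(buffer) < MIN_CHUNKS_BEFORE_PLAY:
    chunk = receive_chunk()
    if chunk:
        buffer.append(chunk)

# Playback drains the buffer while the source keeps (unreliably) refilling it
for _ in range(10):
    chunk = receive_chunk()
    if chunk:
        buffer.append(chunk)
    if buffer:
        playing = buffer.popleft()  # output stays steady even when the source stalls
```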

Bug

Bug In the computer world, a bug is an error in a software program. It may cause a program to unexpectedly quit or behave in an unintended manner. For example, a small bug may cause a button within a program's interface not to respond when you click it. A more serious bug may cause the program to hang or crash due to an infinite calculation or memory leak. From a developer perspective, bugs can be syntax or logic errors within the source code of a program. These errors can often be fixed using a development tool aptly named a debugger. However, if errors are not caught before the program is compiled into the final application, the bugs will be noticed by the user. Because bugs can negatively affect the usability of a program, most programs go through extensive testing before they are released to the public. For example, commercial software often goes through a beta phase, where multiple users thoroughly test all aspects of the program to make sure it functions correctly. Once the program is determined to be stable and free from errors, it is released to the public. Of course, as we all know, most programs are not completely error-free, even after they have been thoroughly tested. For this reason, software developers often release "point updates" (e.g. version 1.0.1), which include bug fixes for errors that were found after the software was released. Programs that are especially "buggy" may require multiple point updates (1.0.2, 1.0.3, etc.) to get rid of all the bugs. Updated December 17, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bug Copy
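As a hypothetical example (not taken from this entry), the snippet below shows a classic logic bug, an off-by-one error, and its fix; a debugger or careful testing would reveal the difference.

```python
# Hypothetical example of a logic bug: an off-by-one error in a loop.
def sum_first_n(values, n):
    total = 0
    for i in range(n - 1):   # BUG: stops one item early
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    total = 0
    for i in range(n):       # FIX: iterates over all n items
        total += values[i]
    return total

print(sum_first_n([1, 2, 3, 4], 4))        # 6  (incorrect)
print(sum_first_n_fixed([1, 2, 3, 4], 4))  # 10 (correct)
```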

Bug Bounty

Bug Bounty A bug bounty is a reward offered by the owners of a website, software company, or other business to outside individuals in return for finding and reporting bugs in a system. Companies offer these rewards to incentivize ethical white-hat hackers to identify security holes before criminal hackers do. Rewards are usually a mix of financial compensation and professional recognition for the hacker. While most software companies employ their own in-house security researchers to find and resolve possible security holes, an outside perspective is always valuable. A zero-day exploit discovered and used by a black-hat hacker can cause a company significant financial loss, a public relations fiasco, or even legal liability. The bounty paid to a hacker to identify a potential security problem is often money well spent. The amount of money offered per bug varies, depending on the company offering the bounty and the expected impact of the identified bug. Small bounties are often a few hundred dollars, while more impactful bugs can fetch tens or even hundreds of thousands of dollars. The biggest bug bounties from large tech companies like Apple, financial institutions, and cryptocurrency blockchain groups can pay a skilled hacker several million dollars for a single bug. Updated November 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bug_bounty Copy

Build

Build In software development, "build" is used as both a verb and a noun. The verb refers to the build process, or steps, required to create an application. The noun describes a specific version of a program. Build (verb) Creating an executable software program is often described as "building" an app because of all the steps involved. These steps are collectively known as the "build process," which includes the following: Checking source code for syntax errors Compiling source code into machine code Linking libraries and other resources, such as images and media files, to the app Generating a runnable application or executable file Programmers typically use a software development application, such as an IDE, to automate the build process. Many IDEs include a "Build" or "Build and Run" command that performs all the build steps and outputs an executable application. Build (noun) Each version of a software program is called a build. Some builds may be internal (not released to the public), while others are official releases. For example, a developer may test and refine several internal builds before releasing a stable build to the public. Since each build is unique, the build number or ID must also be unique. Some developers append the build ID to the public version, while others display it as a separate value. Many applications, especially smaller ones, do not include a public build number. The build ID is typically the fourth number (after the third dot) when appended to the public version. Standard software versioning uses the following convention: Major version . minor version . patch . build For example, app version 4.2.17.2895 can be broken down as: 4 - Major version 2 - Minor version 17 - Patch 2895 - Build NOTE: In some cases, developers release different builds between automatic software updates. For example, a developer may publish a new build after making a minor update, such as fixing a typo. But the update is not significant enough to push out the latest version to all users. Instead, a developer may wait until the next minor version or patch release to notify users of the update. Updated October 26, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/build Copy
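A minimal Python sketch of the versioning convention described above; the version string is the example from this entry, but the parsing code itself is just an illustration.

```python
# Splitting a "major.minor.patch.build" version string into its components.
version = "4.2.17.2895"
major, minor, patch, build = (int(part) for part in version.split("."))

print(f"Major: {major}, Minor: {minor}, Patch: {patch}, Build: {build}")
# Major: 4, Minor: 2, Patch: 17, Build: 2895
```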

Burn

Burn "Burning" is the process of writing data to a writable or rewritable optical media disc, like a CD-R or DVD-R. The term comes from a disc drive's laser burning a series of dots into a layer on the underside of the disc. These pits, and the gaps between them, contain the digital data encoded on the disc. A professionally-pressed CD or DVD has a series of pits stamped into a layer of reflective metal under a protective layer of plastic. When an optical disc drive reads the disc, a laser focused on the underside reads the flat areas (which reflect the laser light) and the pits (which scatter the laser light), then decodes that data. A writable disc has an extra layer of transparent dye between the plastic and the metal. The recordable disc drive uses its laser to burn a series of dots into the dye layer. When read later, these dots scatter the laser light the same way the pits stamped into a pressed disc do. Rewritable discs are burned just like writable discs are, but instead of a layer of dye, the drive's laser burns the data onto a reflective layer of a phase change metal alloy. This layer starts in a crystalline form that light can pass through to the reflective metal layer underneath. The disc drive's laser burns data to this layer by heating the alloy past its melting point, which turns opaque as it cools. The disc drive can erase a rewritable disc by using its laser to heat the alloy back to its crystallization point. NOTE: Burning data to a disc is more delicate than writing data to a hard disk or flash media. First, a computer organizes data into the correct file system for the disc. It must also keep that data in a buffer to write to the disc at a constant pace. If this buffer runs out of data, called a "buffer underrun," the burn will fail. A rewritable disc can be erased for another attempt, but a write-once CD-R, DVD-R, or DVD+R disc is ruined and destined for the trash. Updated November 21, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/burn Copy

Bus

Bus A computer bus is a physical connection that links a computer's components on the motherboard. Each bus consists of a circuit between two or more chips, expansion cards and slots, or ports using a series of small wires. These circuits let the components exchange data while also providing power from the motherboard. At any given time, each wire in a bus can either carry a voltage or not, showing one of two states — on (1) or off (0) — as a single bit of data. The number of wires in a bus determines how much data it can transfer in a single clock cycle, referred to as its width and measured in bits. For example, a bus with 64 wires can send 64 bits at once, making it a 64-bit bus. Buses also have a speed, measured in megahertz, that indicates how many times per second it transfers data. The primary bus that connects a computer's CPU to the rest of the motherboard's components is known as the "system bus." The system bus includes three separate buses that handle different tasks: The address bus carries information about the physical address of the memory location that is being read or written. The total number of addressable memory locations depends on the width of the address bus — a 16-bit bus can address 65,536 memory locations (64 kilobytes), while a 32-bit bus can address 4,294,967,296 locations (4 gigabytes). The data bus carries the actual data to and from the system's RAM. A wider data bus can transfer more information per clock cycle. The control bus carries commands between the CPU and the rest of a computer's components. It also carries status reports from devices back to the CPU. In addition to the system bus, computers contain other buses that connect components and peripherals to their respective controller chips. Internal buses (also called local buses) connect components inside the computer, while external buses connect the ports used for attaching peripherals. For example, a computer's internal memory bus connects the RAM slots to the memory controller, while the external USB bus connects USB ports to the I/O controller. Updated January 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/bus Copy
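The relationship between address-bus width and addressable memory quoted above follows directly from powers of two, as this small Python sketch shows.

```python
# Addressable memory locations for a given address-bus width (2 to the power of the width).
def addressable_locations(bus_width_bits: int) -> int:
    return 2 ** bus_width_bits

print(addressable_locations(16))  # 65,536 locations (64 kilobytes)
print(addressable_locations(32))  # 4,294,967,296 locations (4 gigabytes)
```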

Byte

Byte A byte is a data measurement unit that contains eight bits, or a series of eight zeros and ones. A single byte can be used to represent 2^8, or 256, different values. The byte was originally created to store a single character, since 256 values are sufficient to represent all lowercase and uppercase letters, numbers, and symbols in western languages. However, since some languages have more than 256 characters, modern character encoding standards, such as UTF-16, use two bytes (16 bits) for each character. With two bytes, it is possible to represent 2^16, or 65,536, values. While the byte was originally designed to store character data, it has become the fundamental unit of measurement for data storage. For example, a kilobyte contains 1,000 bytes. A megabyte contains 1,000 x 1,000, or 1,000,000 bytes. A small plain text file may only contain a few bytes of data. However, many file systems have a minimum cluster size of 4 kilobytes, which means each file requires a minimum of 4 KB of disk space. Therefore, bytes are more often used to measure specific data within a file rather than files themselves. Large file sizes may be measured in megabytes, while data storage capacities are often measured in gigabytes or terabytes. NOTE: One kibibyte contains 1,024 bytes. One mebibyte contains 1,024 x 1,024, or 1,048,576 bytes. Updated May 2, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/byte Copy
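A short Python sketch of the arithmetic above, plus a quick look at how character encodings use one or more bytes per character.

```python
# Values representable by one and two bytes
print(2 ** 8)    # 256
print(2 ** 16)   # 65536

# Bytes used to encode a character in different standards
print(len("A".encode("utf-8")))       # 1 byte
print(len("é".encode("utf-16-le")))   # 2 bytes per character in UTF-16
```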

Bytecode

Bytecode Bytecode is program code that has been compiled from source code into low-level code designed for a software interpreter. It may be executed by a virtual machine (such as a JVM) or further compiled into machine code, which is recognized by the processor. Different types of bytecode use different syntax, which can be read and executed by the corresponding virtual machine. A popular example is Java bytecode, which is compiled from Java source code and can be run on a Java Virtual Machine (JVM). Below are examples of Java bytecode instructions. new (create new object) aload_0 (load reference) istore (store integer value) ladd (add long value) swap (swap two values) areturn (return value from a function) While it is possible to write bytecode directly, it is much more difficult than writing code in a high-level language, like Java. Therefore, bytecode files, such as Java .CLASS files, are most often generated from source code using a compiler, like javac. Bytecode vs Assembly Language Bytecode is similar to assembly language in that it is not a high-level language, but it is still somewhat readable, unlike machine language. Both may be considered "intermediate languages" that fall between source code and machine code. The primary difference between the two is that bytecode is generated for a virtual machine (software), while assembly language is created for a CPU (hardware). Updated January 23, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/bytecode Copy
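Java bytecode is the example above, but other interpreted languages use the same idea. As an illustration, Python compiles source code to its own bytecode, which the standard dis module can disassemble:

```python
# Disassembling a small function into Python bytecode (analogous to, but distinct
# from, the Java bytecode instructions listed above).
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Prints low-level instructions such as LOAD_FAST, which the Python virtual
# machine executes; the exact opcodes vary between Python versions.
```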

C

C C is a general-purpose, high-level programming language. Created at Bell Labs in the 1970s, its developers first used it to write programs for the Unix operating system. C is very efficient at using a computer's memory. It also gives programmers more direct access to a computer's hardware than other high-level languages. For those reasons, it is now widely used to write operating system kernels and device drivers. C is an early example of a structured programming language, which divides a larger program into smaller modules or functions. Structured languages are easier to manage and reduce the complexity of the program's source code. C is also a compiled language, which requires a program's source code to be compiled into machine language before it can run. Since its introduction, C has inspired other programming languages. C++, Java, Objective-C, and C# are object-oriented languages heavily influenced by C. Many other languages base their syntax on C, including JavaScript, PHP, and Perl. Updated March 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/c Copy

C#

C# C# (pronounced "C Sharp") is a programming language developed by Microsoft. It was introduced in 2002 with version 1.0 of Microsoft's .NET Framework. Since then, C# has gone through several revisions, corresponding with each .NET update. Today, it is one of the most popular programming languages for creating Windows programs and web applications. C# is a derivative of the C programming language and is similar to C++. It uses the same basic operators as C++, is object oriented, case sensitive, and has nearly identical syntax. However, there are several differences between C# and C++. Below are just a few examples: Arrays in C++ are pointers, while in C#, they are objects that may include methods and properties. The bool (boolean) data type is not recognized as an integer as it is in C++. The keywords typedef, extern, and static all have different meanings in C# than they do in C++. C# switch statements do not support fall-through from one case to another. Global methods and variables are not supported in C#, while they are in C++. Most importantly, C# is designed specifically for Microsoft's .NET Framework. This allows developers to take advantage of all the features offered by the .NET API. However, it also means C# applications can only run on platforms that support .NET runtime, such as Windows, Windows Server, and Windows Phone. In order for programs written in C# to run on other platforms, the code must be compiled using a conversion tool like Microsoft .NET Native. NOTE: The name "C#" comes from the musical note "C♯," implying it is a step up from the original version of C. The ♯ symbol is also comprised of four plus signs, which may imply C# is more advanced than C++ as well. Updated June 4, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/c_sharp Copy

C++

C++ C++ (pronounced "C plus plus") is general-purpose programming language, created as an extension of the original C language. It uses nearly identical syntax to C but adds new features for object-oriented programming. It also includes more advanced memory management features, like dynamic memory allocation, and a more comprehensive standard library of pre-written functions. Introduced in 1985, C++ is still one of the most widely used programming languages for developing software. It retains many of the low-level capabilities of C that allow developers to quickly and efficiently access the computer's hardware and memory. This makes C++ a useful language for developing operating systems, embedded IoT software, and video games. Since C++ is an extension of the original C language, the two languages share most of their syntax. Like C, C++ is a compiled language that requires a program's source code to be compiled into machine language before it can run. Both are structured programming languages that divide programs into small functions or modules, and they share the same control structures like loops, if-else statements, and switch statements. Updated March 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cplusplus Copy

Cable Management

Cable Management Cable management is the organization of cables connected to electrical devices. This includes power cables, network cables, audio/video cables, and many others. Managing cables is a key aspect of a clean and safe home or work environment. Electrical devices often require multiple cables. For example, a desktop computer may require a power cord, Ethernet cable, and multiple USB cables for peripheral devices. A home theater system may include multiple power cables, as well as A/V cables for HDMI and analog audio connections. The resulting pile of cables, if not managed properly, can easily become a tangled mess. Cable management offers several benefits:

It provides a cleaner workspace or living area.
It makes it easier to add or remove devices and cables.
It protects cables from crimping or other damage.
It prevents cables from disconnecting unexpectedly.
It limits interference from other cables.

Cable management may be as simple as tying a few cables together with adjustable cable clips. This type of cable management is often all that is required for a home computer setup. In some cases, a cable sleeve or a cable cover below a desk may help move cables out of the way. Managing cables can also be a complex process. For example, network engineers may need to organize hundreds of network cables from a server rack that connect to power outlets or network switches. This type of organization requires choosing the right cable length and correct cable management sleeves so that cables can be easily added or removed. Regardless of the application, cable management is an important part of any electronic setup. Simple cable management accessories, such as cable ties and cable sleeves, can turn a tangled pile of cables into a clean and organized space. Updated September 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cable_management Copy

Cable Modem

Cable Modem A cable modem is a peripheral device used to connect to the Internet. It operates over coax cable TV lines and provides high-speed Internet access. Since cable modems offer an always-on connection and fast data transfer rates, they are considered broadband devices. Dial-up modems, which were popular in the early years of the Internet, offered speeds close to 56 Kbps over analog telephone lines. Eventually, DSL and cable modems replaced dial-up modems since they offered much faster speeds. Early cable modems provided download and upload speeds of 1 to 3 Mbps, 20 to 60 times faster than the fastest dial-up modems. Today, standard cable Internet access speeds range from 25 to 50 Mbps. On the high end, Comcast offers an "Xfinity Extreme" service with speeds up to 505 Mbps. Most cable modems include a standard RJ45 port that connects to the Ethernet port on your computer or router. Since most homes now have several Internet-enabled devices, cable modems are typically connected to a home router, allowing multiple devices to access the Internet. Some cable modems even include a built-in wireless router, eliminating the need for a second device. NOTE: While "cable modem" includes the word "modem," it does not function as a traditional modem (which is short for "modulator/demodulator"). Cable modems send and receive information digitally, so there is no need to modulate an analog signal. Updated March 29, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cable_modem Copy

Cache

Cache Cache, which is pronounced "cash" (not "catch" or "cashay"), stores recently used information so that it can be quickly accessed at a later time. Computers incorporate several different types of caching in order to run more efficiently, thereby improving performance. Common types of caches include browser cache, disk cache, memory cache, and processor cache.

Browser cache - Most web browsers cache webpage data by default. For example, when you visit a webpage, the browser may cache the HTML, images, and any CSS or JavaScript files referenced by the page. When you browse through other pages on the site that use the same images, CSS, or JavaScript, your browser will not have to re-download the files. Instead, the browser can simply load them from the cache, which is stored on your local hard drive.

Memory cache - When an application is running, it may cache certain data in the system memory, or RAM. For example, if you are working on a video project, the video editor may load specific video clips and audio tracks from the hard drive into RAM. Since RAM can be accessed much more quickly than a hard drive, this reduces lag when importing and editing files.

Disk cache - Most HDDs and SSDs include a small amount of RAM that serves as a disk cache. A typical disk cache for a 1 terabyte hard drive is 32 megabytes, while a 2 TB hard drive may have a 64 MB cache. This small amount of RAM can make a big difference in the drive's performance. For example, when you open a folder with a large number of files, the references to the files may be automatically saved in the disk cache. The next time you open the folder, the list of files may load instantly instead of taking several seconds to appear.

Processor cache - Processor caches are even smaller than disk caches. This is because a processor cache contains tiny blocks of data, such as frequently used instructions, that can be accessed quickly by the CPU. Modern processors often contain an L1 cache that is right next to the processor and an L2 cache that is slightly further away. The L1 cache is the smallest (around 64 KB), while the L2 cache may be around 2 MB in size. Some high-end processors even include an L3 cache, which is larger than the L2 cache. When a processor accesses data from a higher-level cache, it may also move the data to the lower-level cache for faster access next time.

Most caching is done in the background, so you won't even notice it is happening. In fact, the only one of the above caches that you can control is the browser cache. You can open your browser preferences to view the cache settings and alter the size of your browser cache or empty the cache if needed. Updated June 24, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cache Copy
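Caching also appears at the application level. The sketch below uses Python's functools.lru_cache to keep recently computed results in memory, a software analog of the hardware and browser caches described above; the function and values are hypothetical.

```python
# Keeping recently used results in memory so repeat requests return instantly.
from functools import lru_cache

@lru_cache(maxsize=128)            # holds up to 128 recent results
def expensive_lookup(key: str) -> str:
    print(f"computing {key}...")   # printed only on a cache miss
    return key.upper()

expensive_lookup("example")   # miss: computed and stored in the cache
expensive_lookup("example")   # hit: served from the cache without recomputing
```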

CAD

CAD Stands for "Computer-Aided Design." CAD is the use of computers to create 2D and 3D designs. Common types of CAD include two-dimensional layout design and three-dimensional modeling. 2D CAD has many applications, but it is commonly used to design vector-based layouts. For example, architects may use CAD software to create overhead views of building floor plans and outdoor landscapes. These layouts, which contain vector graphics, can be scaled to different sizes, which may be used for proposals or blueprints. 2D CAD also includes drawings, such as sketches and mockups, which are common at the beginning of the design process. 3D CAD is commonly used in developing video games and animated films. It also has several real-world applications, such as product design, civil engineering, and simulation modeling. 3D CAD includes computer-aided manufacturing (CAM), which involves the actual manufacturing of three-dimensional objects. Like 2D CAD drawings, 3D models are typically vector-based, but the vectors include three dimensions, rather than two. This allows designers to create complex 3D shapes that can be moved, rotated, enlarged, and modified. Some 3D models are created exclusively of polygons, while others may include Bézier curves and other rounded surfaces. When creating a 3D model, a CAD designer may first construct the basic shape of the object, or "wireframe." Once the shape is complete, surfaces can be added that may include colors, gradients, or designs that can be applied using a process called texture mapping. Many CAD programs include the ability to adjust lighting, which affects the shadows and reflections of the object. Some programs also include a timeline that can be used to create 3D animations. Updated April 4, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cad Copy

CamelCase

CamelCase camelCase is a naming convention in which the first letter of each word in a compound word is capitalized, except for the first word. Software developers often use camelCase when writing source code. camelCase is useful in programming since element names cannot contain spaces. The camelCase naming convention makes compound names more readable. For example, myOneMethod is easier to read than myonemethod. Other examples of camelCase are listed below:

newString;
getNewString()
myVariableName;

The name camelCase (also "camel case" or "dromedary case") comes from the hump on a camel, which is represented by the capital letter in the middle of the compound word. A camelCase word may have one or more capital letters.

camelCase vs PascalCase

camelCase is similar to PascalCase, which capitalizes the first letters of all words in a compound word, including the first word. For example, myFunction() in camelCase would be MyFunction() in PascalCase. Programmers may choose either convention when writing source code, since it does not affect the syntax. While each developer can choose his or her preferred style, some programming languages have standard naming conventions. For example, in Java, the following cases are recommended:

Classes: PascalCase — class VectorImage {}
Methods: camelCase — drawImage()
Variables: camelCase — string newImageName

NOTE: PascalCase is sometimes called "UpperCamelCase," while standard camelCase may be specified as "lowerCamelCase." In recent years, developers have moved away from these terms and use PascalCase and camelCase instead. Updated October 1, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/camelcase Copy
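As a small illustration (the helper name is hypothetical), the Python snippet below builds camelCase identifiers from individual words, matching the examples above.

```python
# Building a camelCase name: first word lowercase, each later word capitalized.
def to_camel_case(words):
    first, *rest = words
    return first.lower() + "".join(word.capitalize() for word in rest)

print(to_camel_case(["my", "variable", "name"]))  # myVariableName
print(to_camel_case(["get", "new", "string"]))    # getNewString
```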

Camera RAW

Camera RAW A camera RAW image is an unprocessed photograph captured with a digital camera. It contains the raw image data captured by the camera's sensor (or CCD), saved in a proprietary file format specific to the camera manufacturer. By default, most digital cameras process and compress photos as JPEG files immediately after capturing the image. The processing step automatically applies the appropriate color correction, and the JPEG compression significantly reduces the file size. The result is an efficiently processed JPEG image. While JPEG images are suitable for most purposes, professional photographers and photography enthusiasts prefer to control how each image is processed. Therefore, many high-end cameras have the ability to shoot in RAW mode instead of JPEG. The raw files are unprocessed, allowing the photographer to adjust settings like exposure, white balance, and saturation after the image has been captured. Instead of applying lossy JPEG compression, which reduces the image quality, RAW mode saves files either in a losslessly compressed format or in an uncompressed format. Because they use less compression, Camera RAW files take up more space than typical JPEG images. In fact, RAW files often require 5 to 10 times more disk space for each image captured. So you'll want to have a large memory card in your camera if you plan on shooting in RAW mode.

Viewing Camera RAW Files

Since Camera RAW files are not saved in a standard image format, image viewing programs may not recognize them. Even if a program supports the Canon RAW format, it may not recognize a RAW file saved by a newly released Canon camera. Therefore, camera manufacturers often include proprietary RAW photo editing software with their high-end cameras. Additionally, popular operating systems and software programs are regularly updated with support for new RAW formats.

Common Camera RAW File Extensions

.ARW (Sony) .CR2 (Canon) .DCR (Kodak) .DNG (Adobe) .ERF (Epson) .NEF (Nikon) .ORF (Olympus) .PEF (Pentax) .RAF (Fuji) .RW2 (Panasonic)

Updated December 10, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/camera_raw Copy

CAN

CAN Stands for "Campus Area Network." A CAN is a network that covers an educational or corporate campus. Examples include elementary schools, university campuses, and corporate buildings. A campus area network is larger than a local area network LAN since it may span multiple buildings within a specific area. Most CANs are comprised of several LANs connected via switches and routers that combine to create a single network. They operate similar to LANs, in that users with access to the network (wired or wireless) can communicate directly with other systems within the network. A college or university CAN may also be called a "residential network" or "ResNet" since it can only be accessed by campus residents, such as students and faculty. The two primary benefits of a CAN are security and speed. Security Unlike a wide area network (WAN), a CAN is managed and maintained by a single entity, such as the campus IT team. The network administrators can monitor, allow, and limit access to the network. Firewalls are typically placed between the CAN and the Internet to protect the network from unauthorized access. A firewall or proxy server may also be used to limit the websites or Internet ports users can access. Speed Since communication within a CAN takes place over a local network, data transfer speeds between systems within the network are often higher than typical Internet speeds. This makes it easy to share large files with other users on the network. For example, it may take several hours to upload a long video to a colleague over the Internet, but the transfer may only take a few minutes over a CAN. NOTE: CAN may also stand for "Corporate Area Network" or "Controller Area Network." Updated November 23, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/can Copy

Caps Lock

Caps Lock Caps Lock is a feature on most keyboards that, while enabled, capitalizes each letter character typed on the keyboard. It allows you to type as if you were holding down the Shift key, but it only affects letters — numbers and punctuation keys are not affected. Pressing the Caps Lock key on the keyboard toggles it on, and it remains active until you press the Caps Lock key again. Pressing the Shift key while typing a letter allows you to enter a lowercase letter with Caps Lock enabled. The Caps Lock key is a toggle key found on almost all keyboards designed for Windows, macOS, Unix, and Linux — Chromebook keyboards lack a dedicated Caps Lock key and instead use a keyboard shortcut to toggle it. In most layouts, the Caps Lock key is found on the left side of the keyboard between the Tab and Shift keys. Many keyboards include an LED light on or near the key that lights up when Caps Lock is active, and some operating systems also show an on-screen indicator.

The Caps Lock key on a MacBook Air keyboard.

Touchscreen keyboards on smartphones omit the Caps Lock key to save screen space. Instead, you can enable Caps Lock on iOS and Android devices by tapping the Shift key twice. To turn Caps Lock off, tap the Shift key again. NOTE: Since passwords are case-sensitive, and password fields obscure your password while you're typing it, Caps Lock can cause you to enter an incorrect password if it's accidentally enabled. Updated November 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/caps_lock Copy

Captcha

Captcha A captcha is a program used to verify that a human, rather than a computer, is entering data. Captchas are commonly seen at the end of online forms and ask the user to enter text from a distorted image. The text in the image may be wavy, have lines through it, or may be highly irregular, making it nearly impossible for an automated program to recognize it. (Of course, some captchas are so distorted that they can be difficult for humans to recognize as well.) Fortunately, most captchas allow the user to regenerate the image if the text is too difficult to read. Some even include an auditory pronunciation feature. By requiring a captcha response, webmasters can prevent automated programs, or "bots," from filling out forms online. This prevents spam from being sent through website forms and ensures that wikis, such as Wikipedia, are only edited by humans. Captchas are also used by websites such as Ticketmaster.com to make sure users don't bog down the server with repeated requests. While captchas may be a minor inconvenience to the user, they can save webmasters a lot of hassle by fending off automated programs. The name "captcha" comes from the word "capture," since it captures human responses. It may also be written "CAPTCHA," which is an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart." Updated May 8, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/captcha Copy

Card Reader

Card Reader A card reader is a hardware device that reads and writes various flash memory card formats. Some computers (particularly laptops) may include built-in card readers, but they are more common on other devices like printers, scanners, digital cameras, and audio recorders. Computers without a built-in card reader can use external USB-connected card readers. An external card reader that uses USB-C can even connect to smartphones and tablets to allow them to transfer data to and from memory cards. External card readers often include multiple slots, each designed for a different type of memory card. SD and microSD cards are the most widely used, but other formats like Compact Flash, XQD, CFexpress, and Memory Stick cards are also commonly found on multi-card readers.

A Lexar multi-card reader with slots for SD, CF, and Memory Stick cards.

Inserting a card into a computer's card reader automatically mounts that card's contents. Once mounted, it appears like any other disk connected to the computer and is accessible through File Explorer (on Windows) or the Finder (on macOS). Since most digital cameras use memory cards to save and transfer digital photos, your computer may even automatically launch photo management software to import photos when a memory card is connected. Like with any other type of removable disk, memory cards in a card reader should be unmounted (or "ejected") before you remove the card; this stops all read-and-write activity and prevents corruption of the card's data. NOTE: Since Windows automatically mounts memory cards once connected, a memory card containing malware could infect your computer. You should never connect an unfamiliar memory card to your computer if you're unsure of its contents. Updated November 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/card_reader Copy

Case-Insensitive

Case-Insensitive If something is case-insensitive, it does not distinguish between uppercase and lowercase characters. It is the opposite of case-sensitive, in which the case of each letter matters. Most search engines are case-insensitive, meaning the case of the keywords does not matter. For example, searching for "Microsoft Word" and "microsoft word" will return the same results. Case-insensitive functions are able to match strings regardless of letter case. Usernames are also typically case-insensitive. For instance, when logging in to a website, the username field of the login is not case-sensitive. The password field is. So, while capital letters usually don't matter when entering your username, they are important when entering your password.

Sorting Text

Sorting algorithms are usually case-insensitive, meaning they treat "A" the same as "a." However, some functions use the ASCII value of each character, in which all uppercase letters come before all lowercase letters. In an ASCII sort, C comes after A, but before b. When sorting text values in a database, the sort order is determined by collation, a rule for sorting the data. The default collation is typically case-insensitive and based on the text's character encoding, which assigns a numeric value to each character. MySQL and other database engines support multiple collations, such as latin1_general_cs (case-sensitive) and latin1_general_ci (case-insensitive). While it is rare to use a case-sensitive — or "cs" — collation, it is possible to override the default "ci" collation if necessary. Updated August 20, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/case-insensitive Copy
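A minimal sketch of a case-insensitive match: normalizing both strings before comparing them, as most search functions do internally (the example strings come from this entry).

```python
# Case-insensitive string matching by normalizing both sides first.
query = "Microsoft Word"
stored = "microsoft word"

print(query == stored)                  # False (case-sensitive comparison)
print(query.lower() == stored.lower())  # True  (case-insensitive match)
```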

Case-Sensitive

Case-Sensitive Case-sensitive means distinguishing between lowercase and uppercase characters. It is the opposite of case-insensitive, where the case of each letter is irrelevant. Case sensitivity has several applications in computer science, including matching and sorting textual data. 1. Matching A text-based search may be either case-sensitive or case-insensitive. If a search function is case-sensitive, searching for "example" will not match "Example" and vice versa. If the function is case-insensitive, the two terms will match. Most search engines and search functions are case-insensitive, meaning uppercase and lowercase letters do not matter in the search keywords. Case sensitivity also applies to authentication. When logging in to an app or website, the password is always case-sensitive. In other words, the login will fail if you enter "testpass" when the correct password is "TestPass". In most situations, the username is case-insensitive, though some logins require a username with the proper case. 2. Sorting Like searching, most sorting algorithms are case-insensitive. However, some programs support case-sensitive sorting, meaning the letter case affects the order of the results. If a case-sensitive sorting algorithm uses each character's ASCII value, then all capital letters come before all lowercase characters. In an ASCII-based sort, T comes before t, and also before a, b, c, etc. Other case-sensitive algorithms may only prioritize capital letters before their lowercase counterparts. So T would come before t, but after a, b, c, etc. Updated August 13, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/case-sensitive Copy
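The sorting behavior described above can be seen in a short Python example; Python's default sort compares character code points, so uppercase ASCII letters sort before lowercase ones.

```python
# Case-sensitive vs. case-insensitive sorting.
words = ["apple", "Banana", "cherry", "Apricot"]

print(sorted(words))                 # ['Apricot', 'Banana', 'apple', 'cherry']
print(sorted(words, key=str.lower))  # ['apple', 'Apricot', 'Banana', 'cherry']
```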

Cat 5

Cat 5 Cat 5 (short for "Category 5") is a type of Ethernet cable. It supports data transfer rates of up to 100 Mbps, or 12.5 megabytes per second. Cat 5 cables operate at a 100 MHz frequency and can extend up to 100 meters between connected devices. Each Cat 5 cable has eight internal wires, or four "twisted pairs." These pairs, which are color-coded, may be wired two different ways into the RJ45 connector. A Cat 5 cable with the same wiring on each end is a "patch cable," while one that is wired differently on each end is a "crossover cable." Category 5 Ethernet cables were commonly used for creating local area networks in the late 1990s and early 2000s. Since they supported 100 Mbps transfer rates, they were sufficient for 100BASE-T "Fast Ethernet" networks. In 2001, Cat 5e cables, which support 1000BASE-T (Gigabit Ethernet), began to replace Cat 5 cables. Today, most wired networks use Cat 5e cables and above. The low maximum data transfer rate of Cat 5 would create a bottleneck in most modern networks. While Cat 5 cables are only rated for 100 Mbps networks, many Cat 5 cables support data transfer rates over 100 Mbps. Some may even support speeds up to 1,000 Mbps. NOTE: "Cat 5" is shorthand for "category 5." Since there is no standard abbreviation, "Cat5" and "Cat-5" are also acceptable. Updated December 16, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cat_5 Copy
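The Mbps-to-megabytes conversion quoted above is a simple division by eight, shown in this small sketch.

```python
# Converting megabits per second to megabytes per second (8 bits per byte).
def mbps_to_megabytes_per_second(mbps: float) -> float:
    return mbps / 8

print(mbps_to_megabytes_per_second(100))   # 12.5 (Cat 5)
print(mbps_to_megabytes_per_second(1000))  # 125.0 (Cat 5e / Gigabit Ethernet)
```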

Cat 5e

Cat 5e Cat 5e (short for "Category 5 Enhanced") is a type of Ethernet cable. A Cat 5e cable has the same appearance and wiring scheme as a Cat 5 cable, but supports data transfer rates of 1,000 Mbps or one gigabit per second. Therefore, Cat 5e cables are sometimes called Gigabit Ethernet cables. Cat 5e cables operate on the same 100 MHz frequency as Cat 5 and have the same maximum length of 100 meters before requiring a router or powered signal booster. The primary difference between the two standards is that Cat 5 is only rated for 100 Mbps networks, while Cat 5e supports 1,000 Mbps networks. While the cables are nearly identical, Cat 5e cables have improved shielding that reduces crosstalk, or interference from adjacent wire pairs. Technically speaking, Cat 5e cables must adhere to stricter IEEE standards in regards to Near-End Crosstalk (NEXT) measurements, which are measured in decibels. The difference between Cat 5 and Cat 5e shielding is specific to the internal wires, not the outer sleeve, or "jacket." An Ethernet cable's outer jacket (designed for indoor, outdoor, in-wall, etc) has no bearing on the IEEE rating. In 2001, Cat 5e began to replace Cat 5 as the most widely used Ethernet cable standard. Today, nearly all wired networks use Cat 5e cables or higher. Updated December 31, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cat_5e Copy

Cat 6

Cat 6 Cat 6 (short for "Category 6") is an Ethernet standard. It operates at 250 MHz and supports data transfer rates of 10 Gbps. The maximum speed of Cat 6 is 10x faster than the previous standard, Cat 5e. From the outside, Cat 6 cables look identical to Cat 5 and Cat 5e cables. They use the same RJ45 connectors and are backward-compatible with Cat 5/5e Ethernet networks. Internally, Cat 6 cables retain the same "twisted pair" wiring scheme with eight internal wires. The primary difference between Cat 6 and Cat 5 is that Cat 6 operates at 250 MHz, while Cat 5 and 5e operate at 100 MHz. The higher frequency enables Cat 6 cables to transfer data more quickly. Cat 6 vs Cat 6a Similar to Cat 5, Cat 6 has a variant with performance improvements. The variant is called Cat 6a (rather than Cat 6e). Cat 6a operates at 500 MHz, twice the frequency of Cat 6, but supports the same maximum data transfer rate of 10 Gbps. The difference is that Cat 6 can only transmit data at 10 Gbps for 55 meters (180 feet), while Cat 6a supports 10 Gbps for 100 meters (328 feet). A Cat 6 cable will still work over 55 meters, but the speed may decline. In order for Ethernet cables to transfer data at full speed over long distances, a router or powered signal booster must be placed every 100 meters. While Cat 5e was the most popular Ethernet standard for nearly two decades, many modern LANs now use Cat 6 or Cat 6a cables. Even if the network equipment does not support 10 Gbps transfer speeds, Cat 6 cable is necessary to transfer data over 1 Gbps. Cat 6 cables are usually more expensive than Cat 5/5e cables, but re-running new Ethernet lines can be time-consuming and costly. Therefore, it is typically worth the extra expense to future-proof a network with faster Ethernet cables. NOTE: Most Ethernet cables have the type of standard printed on the outer jacket. Updated January 9, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cat_6 Copy

Catalina

Catalina Catalina is the sixteenth version of Apple's macOS operating system and is also known as "macOS 10.15." It follows macOS Mojave (version 10.14) and was released on October 7, 2019. While Catalina includes several new features, it is primarily a performance update. Notably, Catalina is the first version of macOS to drop 32-bit support. By removing support for 32-bit applications, Apple trimmed a significant amount of "legacy" code from the OS. The result is a lighter-weight operating system optimized for 64-bit applications. While most modern apps are 64-bit, older 32-bit apps will not run in macOS 10.15. Catalina is also the first version of macOS that allows an iPad to serve as a secondary display. This feature, called "Sidecar," works with both a wired and wireless connection between a Mac and an iPad. It also introduces "Project Catalyst," which makes it possible for iOS and iPadOS apps to run on macOS. Most of Catalina's other new features are related to the bundled apps. For example, iTunes is replaced by three new apps: Music, Podcast, and TV. Mail adds message blocking and built-in unsubscribing. The Reminders app provides a new way to organize and manage reminders. Catalina also includes Screen Time, a feature introduced in iOS 12. It allows users to track how much time they spend on their computers, including how much they use individual apps. Updated May 13, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/catalina Copy

Cc

Cc Stands for "Carbon Copy." Cc is one of the standard email address fields. It is meant for secondary recipients of a message, while the To field is for its primary recipients. All email addresses in a message's Cc field receive a copy of the message, and will also receive any replies sent using the Reply All command. The term comes from the historic use of carbon paper to create copies of written documents and forms. Standard email courtesy reserves the Cc field for secondary recipients of an email message who benefit from being informed of the message's contents but whose input is not necessary. The To field, on the other hand, is for the primary participants in the exchange. For example, two coworkers discussing how to split up tasks in a project would exchange emails with each other's address in the To field; adding their manager's email address to the Cc field will keep them in the loop. In effect, the Cc field is an 'FYI' field. An email message with a recipient in the Cc field Every recipient of an email can see the addresses in both the To and Cc fields. The Blind Carbon Copy (Bcc) address field is for situations where a sender wants to hide a recipient's email address from other recipients. The Bcc field can keep a single recipient secret or prevent recipients of a mass email from knowing everyone else's email address. Addresses in the Bcc field will also not receive replies if another recipient sends a Reply All message. Updated December 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cc Copy

CCD

CCD Stands for "Charged Coupled Device." A CCD is a type of image sensor used to capture still and moving imagery in digital cameras and scientific instruments. A CCD sensor is a flat integrated circuit that contains an array of light-sensitive elements, called pixels, that capture incoming light and turn it into electrical signals. Other components in the device use these signals to construct a digital recreation of the original image. The quality of the image produced by a CCD depends on its resolution — the number of physical pixels present on the sensor. A sensor's resolution is measured in Megapixels (the total number of pixels in millions). For example, a 16 Megapixel sensor produces an image with twice as many total pixels as an 8 Megapixel sensor, resulting in a more detailed image. A physically-larger sensor can either pack in more pixels than a smaller one, or include larger pixels that are more sensitive to light. CCD sensors can also detect light outside the visible spectrum, enabling infrared photography and night-vision video recording. Since CCDs only capture the amount of light that hits the sensor, not the color of that light, an unfiltered sensor only produces a monochrome image. CCDs can use red, green, and blue (RGB) filters to capture colored light separately before combining the data into a full-color image. Some devices even include three separate CCDs, one each for red, blue, and green, to capture full-color image data in less time. While many early digital cameras used CCDs to capture images, other types of image sensors came along that were faster and less prone to overexposure. Eventually, most consumer- and professional-level digital cameras switched to CMOS image sensors. However, CCDs produce higher-quality images when overexposure is not a concern, so they're still used in scientific and medical instruments. They're even used in the harsh environment of space, taking photos of far-off galaxies from orbiting satellite telescopes. Updated December 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ccd Copy

CD

CD Stands for "Compact Disc." A CD is a type of optical media for storing digital audio and data. CDs are circular discs, 120 mm (4.7 in) in diameter and 1.2 mm (0.047 in) thick. The underside of a CD consists of protective plastic over a reflective layer containing digital data, encoded as a series of pits and lands read by a laser as the disc spins in an optical drive. A standard CD can store 700 MB of data or up to 80 minutes of digital audio. CDs were first introduced in 1982 by Sony and Philips as a new music format. Unlike analog mediums like vinyl records and audio cassettes, you can listen to a CD an unlimited number of times without any loss of quality due to wear. CDs also separate audio into distinct audio tracks that you can jump directly to or skip over. All audio CDs use the same audio encoding format, known as the Red Book specification — two-channel (stereo) 16-bit PCM audio sampled at 44.1 kHz. Later in the 1980s, the CD was adapted to store computer data on CD-ROM discs. Since a single CD-ROM disc could contain as much data as several hundred floppy disks, they were an ideal format for distributing multimedia software and games. Recordable (CD-R) and rewritable (CD-RW) discs eventually allowed home computer users to burn their own CDs to share files, make custom music mixes, and back up their computers. NOTE: Later optical disc formats like DVD and Blu-Ray use the same dimensions as CDs but with much smaller pits and lands that store data more densely. This allows optical drives designed for DVDs and Blu-Ray discs to read and play CDs. Updated April 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cd Copy

CD-R

CD-R Stands for "Compact Disc-Recordable." CD-R is an optical media storage medium that can hold up to 700 MB of data. The discs are used for backing up data, sharing multimedia, and general data storage. Unlike traditional CDs that are factory-pressed with pre-recorded data, CD-Rs allow users to write data onto them once, making them a convenient and cost-effective archival solution. A CD-R disc is coated with a thin layer of dye and a reflective layer. When data is written, or "burned," onto a CD-R, a laser beam heats the dye, creating permanent physical changes that form pits (non-reflective) and lands (reflective) on the disc's surface. The pits and lands represent binary zeros and ones, which an optical drive can read as digital data. Top and bottom of a CD-R disc One key advantage of CD-Rs is their compatibility with a wide range of devices, including CD-ROM drives, CD players, and standalone CD burners. You can use CD-Rs to create music CDs, software installation discs, sharable photo albums, and data backups. Once data is written onto a CD-R, it cannot be erased or overwritten. Additionally, the quality of CD-Rs may degrade over time due to light exposure, heat, and humidity, potentially leading to data loss or corruption. Today, most PCs do not have optical drives, and CD-Rs are much less common than in the early 2000s. Newer data storage mediums such as flash drives and external hard drives read and write data faster and have much higher capacity. Updated June 7, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cdr Copy

CD-ROM

CD-ROM Stands for "Compact Disc Read-Only Memory." CD-ROM is a read-only optical media format that uses CDs to store data. A computer with an optical drive can read, but not write, data on a CD-ROM. A single disc can store up to 700 MB of data. Some CD-ROM discs, known as "Enhanced CDs," store a combination of audio tracks and a data track. CD-ROM discs were often used to distribute software and video games in the 1990s and 2000s before eventually being replaced by higher-capacity DVDs. CD-ROM discs and audio CDs are physically identical, storing and retrieving data in the same method. Digital data is encoded and stored as a series of pits on a flat reflective layer on the bottom of the disc. The laser in the optical drive reads these pits, then decodes the data. The only difference between the two is how the data is formatted — multiple PCM audio tracks on an audio CD, or a single track containing data on a file system on a CD-ROM. An optical drive can spin a CD-ROM disc at a significantly higher speed than an audio CD player. A CD spinning at the standard rate for audio playback supports a data transfer rate of approximately 150 KB/s; at this speed, reading an entire CD-ROM full of data takes almost 80 minutes. CD-ROM drives are classified by how many times faster they can spin, with many drives capable of 40x (6,144 KB/s) or more. Updated October 25, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cdrom Copy

CD-RW

CD-RW Stands for "Compact Disc Re-Writable." CD-RW is the rewritable compact disc format. A rewritable CD can be written to, erased, and re-written hundreds of times. One disc can store up to 700 MB of data or 80 minutes of digital audio. Rewritable discs may not play correctly in all CD players, as the rewritable layer on the underside of the disc is not as reflective as the pressed layer on a CD-ROM. Writing to a CD-RW disc requires a compatible disc drive that supports rewriting optical media. Once data is written to a CD-RW disc, it is read-only and cannot be edited; in order to to make any changes to the files on a disc, it must be erased and re-written. This process makes the format a poor choice for storing files that need to be frequently modified. Instead, the rewritable discs are better suited for frequent data backups, although the format's small capacity compared to DVD±RW discs limits that use as well. An optical disc drive that can write discs has three lasers—a low-power laser that reads data, a medium-power laser that erases rewritable discs, and a high-power laser that burns data to a disc. A CD-RW drive burns a series of spots onto a reflective layer of a phase change metal alloy on the underside of the disc. At first, this metal alloy exists in a crystalline form that lets light through. A high-power burning laser heats the alloy past its melting point, which then turns opaque as it cools. These opaque spots encode digital data that the reading laser in the drive reads and decodes. When it's time to erase a disc, the drive uses the medium-powered erase laser to heat the alloy layer to its crystallization point. Updated November 4, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cdrw Copy

CDFS

CDFS Stands for "Compact Disc File System." CDFS is a file system used for storing data on CDs. It is a standard published by the International Organization for Standardization (ISO) and is also known as "ISO 9660." Discs that store data using the ISO 9660 standard can be recognized by multiple platforms, including Windows, Macintosh, and Linux systems. CDFS specifies several disc properties, including the volume attributes, file attributes, and file placement. It also specifies the overall data structure of a CD, such as the header size and the data storage area of the disc. While CDFS was originally designed for read-only single-session discs, an extension of the standard allows multiple-session writing to CD-R discs. This means multiple volumes may be stored on a single CD. The CDFS standard is useful for burning discs that will be shared between multiple computers. Because CDFS is not specific to a single operating system, a disc burned on a Macintosh using the compact disk file system can be read on a Windows or Linux-based computer. Disc images can also be saved using the CDFS standard, which may be used to burn ISO 9660 discs. These files are typically saved with an .ISO file extension. Updated January 20, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cdfs Copy

CDMA

CDMA Stands for "Code-Division Multiple Access." CDMA is a wireless transmission technology that allows multiple digital signals to travel over the same carrier channel. Qualcomm developed and patented CDMA as a commercial cellular networking method in the 1990s, and carriers first deployed networks using the technology in 1995. CDMA was one of two competing standards for 2G and 3G cellular data networks, along with GSM. A CDMA network assigns every user a code that encodes and decodes wireless signals sent to and from their device. Multiple users can communicate with the network's transmission towers using the entire width of the frequency spectrum. Each device decodes only the signals that use their personal code, with signals meant for other users appearing as noise that the receiver can safely discard. By not limiting each user's frequency range, CDMA provides more bandwidth when compared to the TDMA method used by GSM networks. CDMA also allows more simultaneous users to use the same frequency spectrum without negatively affecting network performance. Globally, CDMA networks were less common than GSM. CDMA was only widely used in the United States (primarily through Verizon) and parts of Asia. The two types of networks were incompatible, as they used different frequency ranges and thus required different radio hardware; however, during the later part of the 3G era it was common for smartphones to include radios for both. Both GSM and CDMA networks were gradually replaced during the 2010s by LTE networks. By the end of 2022, most carriers had retired their 2G and 3G networks entirely. Updated July 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cdma Copy

CDN

CDN Stands for "Content Delivery Network." A CDN is a group of servers distributed in different locations. Small CDNs may be located within a single country, while large CDNs are spread across data centers around the world. CDNs are used to provide content to users in different locations as quickly as possible. For example, a user in San Francisco may receive website content from a server in Los Angeles, while a user in England may receive the same content from a server in London. This is accomplished using data replication, which stores the same data on multiple servers. Whenever you access a website hosted on a CDN, the network will intelligently provide you with the content using the server closest to the your geographical location. By providing Internet-based content over a CDN, large businesses can avoid bottlenecks associated with serving data from a single location. It also helps limit the impact of security breaches, such as denial of service attacks. If hardware fails on one server, the CDN can quickly reroute traffic to the next best server, limiting or even eliminating downtime. While CDNs are often used to host websites, they are commonly used to provide other types of downloadable data as well. Examples include software programs, images, videos, and streaming media. Cloud computing services are often provided over content delivery networks as well. Since CDNs automatically choose the best server for each user, there is no need to manually choose the most efficient location, like some FTP download services require. NOTE: While CDNs are often accessed via standard URLs, many CDNs include the letters "cdn" in their web address. Updated September 13, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cdn Copy

Cell

Cell Spreadsheets are made up of rows and columns, which form a table or grid. A cell is a specific location within a spreadsheet and is defined by the intersection of a row and column. Since most spreadsheets use numbers to define rows and letters to define columns, cells are often referenced by a letter and number combination. For example, the cell A3 is located in the first column (A), in the third row of a spreadsheet. The cell B3 would be immediately to the right of A3 and the cell A4 would be directly below it. These examples are relative cell references. When a formula containing a relative reference is copied to another cell, the reference adjusts automatically to match its new position. An absolute cell reference, written with dollar signs (such as $A$3), always points to the same cell no matter where the formula is copied. Most spreadsheet programs support both kinds of references, and absolute references are especially useful when a formula must always refer to one fixed cell. Cells may contain several data types, including numbers, dates, times, currencies, percentages, text, and other types of data. Most spreadsheet programs allow you to format cells for different data types, which helps ensure uniform entries across multiple cells. For example, you could format the active cell (or currently selected cell) to US dollar currency. If you enter "123" in the cell, it may automatically change to "$123.00" when you press Tab or Enter or click outside the cell. If the active cell is formatted for a date, the text "1/2/3" may change to "01/02/2003." Cells may contain manually entered data or the results of calculations from other cells. For example, cell A20 may contain a formula that produces the result of the summation of cells A1-A15. Cell C5 may contain a formula that averages all the numbers in the B column. The data in spreadsheet cells that contain formulas are dynamically updated when the data in referenced cells are changed. Updated November 12, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cell Copy

Cell Reference

Cell Reference A cell reference, or cell address, is an alphanumeric value used to identify a specific cell in a spreadsheet. Each cell reference contains one or more letters followed by a number. The letter or letters identify the column and the number represents the row. In a standard spreadsheet, the first column is A, the second column is B, the third column is C, etc. These letters are typically displayed in the column headers at the top of the spreadsheet. If there are more than 26 columns, the 26th column is labeled Z, followed by AA for column 27, AB for column 28, AC for column 29, etc. Column 53 is labeled BA. Rows simply increment numerically from top to bottom starting with "1" for the first row. Examples of cell references are listed below: First column, seventh row: A7 Tenth column, twentieth row: J20 Sixty-first column, three hundred forty-second row: BI342 One thousandth column, two thousandth row: ALL2000 Cell references are helpful in two ways: 1) They provide an easy way to locate a specific value within a spreadsheet, and 2) they are used in creating formulas. Locating Values If you are reviewing a spreadsheet with another user, you can simply use the column/row combination to reference a specific cell. You can also use the "Go To..." feature to jump to a specific cell. This is especially helpful when working with large spreadsheets that have hundreds or thousands of cells. Creating Formulas Most spreadsheet programs support formulas that can be used to calculate values based on the contents of other cells. For example, a cell may contain the function: =D8/E10 The above function will automatically populate the cell with the value stored in cell D8 divided by the value in E10. If G2 contains the function above, and D8 is 20 and E10 is 5, then G2 will display the value 4. A cell may also simply display the value of another cell. For instance, if G3 contained the function =D8, it would display 20 (the value stored in D8). Updated November 15, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cell_reference Copy
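The column-lettering scheme described above (A through Z, then AA, AB, and so on) is easy to generate programmatically. The short Java sketch below converts a 1-based column number into its letter label; the class and method names are purely illustrative.

public class ColumnLabel {
    // Converts a 1-based column number (1 = A, 27 = AA, 53 = BA) into its letters
    static String toLetters(int column) {
        StringBuilder label = new StringBuilder();
        while (column > 0) {
            column--;                                    // shift to 0-based for this "digit"
            label.insert(0, (char) ('A' + column % 26)); // prepend the next letter
            column /= 26;
        }
        return label.toString();
    }

    public static void main(String[] args) {
        System.out.println(toLetters(1));     // A
        System.out.println(toLetters(27));    // AA
        System.out.println(toLetters(53));    // BA
        System.out.println(toLetters(1000));  // ALL
    }
}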

Certificate

Certificate An SSL certificate, or digital certificate, is a file installed on a secure web server that identifies a website. A digital certificate establishes the identity and authenticity of the company that runs a website so that visitors can trust that the website is secure and reliable. In order to verify that these sites are legitimate (that the owners are who they say they are), the companies and their websites are verified by a third-party certificate authority, such as IdenTrust or DigiCert. Once the certificate authority establishes the legitimacy of an organization and that they run the associated website, it will issue an SSL certificate. The cost for a certificate varies, depending on the level of support provided — some certificates are issued for free, but many cost around $60-$100 per year (or more). This digital certificate is installed on the web server and will be viewable when a user visits the website. Sites with a valid certificate load using HTTPS, and display a padlock icon next to the URL in the address bar. To view the certificate, click the padlock icon. Because digital certificates verify a company's current status, they do not last forever. Most SSL certificates expire after a year, with free certificates expiring after three months. If the certificate is not renewed in time, a site's visitors will see a warning in their web browser before the page loads, informing them that "This website's certificate has expired." While this does not necessarily mean the site is fraudulent, it does show that the site's administrators allowed their certificate to expire before renewing it. NOTE: Even though they are called SSL certificates by certificate authorities and website administrators, they now use the TLS security protocol. TLS replaced SSL in 1999, and SSL was eventually deprecated in 2015. Updated March 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/certificate Copy
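Besides clicking the padlock icon, you can also inspect a site's certificate programmatically by opening an HTTPS connection and reading the certificate chain. The sketch below uses Java's standard HttpsURLConnection class; the URL is a placeholder and the output fields shown are only a small subset of what a certificate contains.

import java.net.URL;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

public class CertificateInfo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; any HTTPS site will work
        URL url = new URL("https://www.example.com/");
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.connect();   // performs the TLS handshake

        // The first certificate in the chain identifies the website itself
        for (java.security.cert.Certificate cert : connection.getServerCertificates()) {
            if (cert instanceof X509Certificate) {
                X509Certificate x509 = (X509Certificate) cert;
                System.out.println("Subject: " + x509.getSubjectX500Principal());
                System.out.println("Issuer:  " + x509.getIssuerX500Principal());
                System.out.println("Expires: " + x509.getNotAfter());
            }
        }
        connection.disconnect();
    }
}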

CGI

CGI CGI has two different meanings: 1) Common Gateway Interface, and 2) Computer Generated Imagery. 1) Common Gateway Interface The Common Gateway Interface (CGI) is a set of rules for running scripts and programs on a web server. It specifies what information is communicated between the web server and clients' Web browsers and how the information is transmitted. Most Web servers include a cgi-bin directory in the root folder of each website on the server. Any scripts placed in this directory must follow the rules of the Common Gateway Interface. For example, scripts located in the cgi-bin directory may be given executable permissions, while files outside the directory may not be allowed to be executed. A CGI script may also request CGI environment variables, such as SERVER_PROTOCOL and REMOTE_HOST, which may be used as input variables for the script. Since CGI is a standard interface, it can be used on multiple types of hardware platforms and is supported by several types of Web server software, such as Apache and Microsoft IIS. CGI scripts and programs can also be written in several different languages, such as C++, Java, and Perl. While many websites continue to use CGI for running programs and scripts, developers now often include scripts directly within Web pages. These scripts, which are written in languages such as PHP and ASP, are processed on the server before the page is loaded, and the resulting data is sent to the user's browser. 2) Computer Generated Imagery In the computer graphics world, CGI typically refers to Computer Generated Imagery. This type of CGI refers to 3D graphics used in film, TV, and other types of visual media. Most modern action films include at least some CGI for special effects, while other movies, such as Pixar animated films, are built completely from computer-generated graphics. Updated June 24, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cgi Copy
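Returning to the first meaning above: a CGI program reads its input from environment variables (and standard input), then writes an HTTP header followed by the response body to standard output. The sketch below is a minimal, hypothetical CGI program written in Java; in practice such programs are more often written in languages like Perl or PHP, and the web server must be configured to execute the compiled program from its cgi-bin directory.

public class HelloCgi {
    public static void main(String[] args) {
        // CGI environment variables supplied by the web server
        String remoteHost = System.getenv("REMOTE_HOST");
        String protocol = System.getenv("SERVER_PROTOCOL");

        // The header must end with a blank line before the body begins
        System.out.print("Content-Type: text/html\r\n\r\n");
        System.out.println("<html><body>");
        System.out.println("<p>Hello from a CGI program.</p>");
        System.out.println("<p>Protocol: " + protocol + "</p>");
        System.out.println("<p>Client: " + remoteHost + "</p>");
        System.out.println("</body></html>");
    }
}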

Character

Character A character is any letter, number, space, punctuation mark, or symbol typed or entered on a computer. Some invisible characters are not represented on screen but are still present to modify or format text, like a tab or carriage return. When using the ASCII character set, a single character takes up one byte of storage space. For example, the phrase "Good afternoon!" is 15 characters and would take up 15 bytes of storage. Characters on a computer, like all other data, are stored as binary data. The particular sequence of bits that make up a single character needs to be defined by a character encoding method. The two most common encoding methods are ASCII and Unicode. ASCII stores a character as a single byte, supporting 128 different letters, numbers, punctuation, and symbols in the Latin alphabet. Unicode allows a single character to be defined by one to four bytes apiece—one byte for the Latin alphabet, numbers, and punctuation; two bytes for other alphabets like Greek, Cyrillic, and Hebrew; three bytes for Chinese, Japanese, and Korean characters; and four bytes for mathematical symbols and emoji. In addition to the vast array of letters, numbers, and symbols across the different alphabets, character encoding sets include some special characters that modify the text around them. The ASCII set includes control characters like the null character, which is used as a string terminator, a carriage return to start a new line, and a tab to insert a tab stop. Unicode specifies more groups of special characters, like word joiners and separators. These control where line breaks are allowed within and between words. NOTE: Most operating systems include a character map application or menu that you can use to insert special characters and/or emoji. Updated October 18, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/character Copy

Character Encoding

Character Encoding While we view text documents as lines of text, computers actually see them as binary data, or a series of ones and zeros. Therefore, the characters within a text document must be represented by numeric codes. In order to accomplish this, the text is saved using one of several types of character encoding. The most popular types of character encoding are ASCII and Unicode. While ASCII is still supported by nearly all text editors, Unicode is more commonly used because it supports a larger character set. Unicode text is typically encoded as UTF-8, UTF-16, or UTF-32, which are different Unicode encoding forms. UTF stands for "Unicode Transformation Format" and the number indicates the number of bits in each code unit; a single character may be stored in one or more code units. From the early days of computing, characters have been represented by at least one byte (8 bits), which is why the different Unicode standards save characters in multiples of 8 bits. While ASCII and Unicode are the most common types of character encoding, other encoding standards may also be used to encode text files. For example, several types of language-specific character encoding standards exist, such as Western, Latin-US, Japanese, Korean, and Chinese. While Western languages use similar characters, Eastern languages require a completely different character set. Therefore, a Latin encoding would not support the symbols needed to represent a text string in Chinese. Fortunately, modern standards such as UTF-16 support a large enough character set to represent both Western and Eastern letters and symbols. Updated September 24, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/characterencoding Copy
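The practical effect of choosing an encoding is how many bytes each character occupies. The short Java sketch below encodes a few sample characters with UTF-8 and UTF-16 and prints the resulting byte counts; the sample strings are arbitrary, and the big-endian UTF-16 charset is used so the counts are not inflated by a byte-order mark.

import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String[] samples = { "A", "é", "語", "😀" };
        for (String s : samples) {
            int utf8 = s.getBytes(StandardCharsets.UTF_8).length;
            int utf16 = s.getBytes(StandardCharsets.UTF_16BE).length;
            // Latin letters need 1 byte in UTF-8; other scripts and emoji need more
            System.out.println("\"" + s + "\"  UTF-8: " + utf8 + " bytes, UTF-16BE: " + utf16 + " bytes");
        }
    }
}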

ChatGPT

ChatGPT ChatGPT is an artificial intelligence chatbot developed by OpenAI that provides conversational responses to text prompts. It can answer questions, summarize articles, and provide creative prompts. While the chatbot can formulate humanlike responses, its algorithm is not error-proof and may produce inaccurate statements. The ChatGPT language model analyzes large data sets across thousands of topics. Examples of input include news articles, fiction and non-fiction books, social media posts, chat logs, and text from various websites. The model uses machine learning to process the data and find patterns and relationships between words. Once it identifies patterns, it uses that data to create inferences and generate new text — similar to how a mobile device's autocomplete feature suggests words as you type. After some fine-tuning by its developers, the model is able to generate long, natural-language responses to text prompts. Even though ChatGPT is more capable than older chatbots, it has its limitations. Any flaws in its data set, such as biased language or perspectives, can carry over into its responses. It also lacks the intuition, emotion, and contextual awareness of interpersonal conversation. Therefore, ChatGPT may not provide relevant or natural-sounding output in some instances. NOTE: A chat bot like ChatGPT can "hallucinate" content — generating text that looks convincing on the surface but, upon closer inspection, is factually incorrect or contextually irrelevant. Because ChatGPT aggregates information and does not provide references for its sources, it may be difficult to determine how much of its output is factually sound. Updated February 21, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/chatgpt Copy

Checksum

Checksum A checksum is a value used to verify the integrity of a file or a data transfer. In other words, it is a sum that checks the validity of data. Checksums are typically used to compare two sets of data to make sure they are the same. Some common applications include verifying a disk image or checking the integrity of a downloaded file. If the checksums don't match those of the original files, the data may have been altered or corrupted. A checksum can be computed in many different ways, using different algorithms. For example, a basic checksum may simply be the number of bytes in a file. However, this type of checksum is not very reliable since two or more bytes could be switched around, causing the data to be different, though the checksum would be the same. Therefore, more advanced checksum algorithms are typically used to verify data. These include cyclic redundancy check (CRC) algorithms and cryptographic hash functions. It is rare that you will need to use a checksum to verify data, since many programs perform this type of data verification automatically. However, some file archives or disk images may include a checksum that you can use to check the data's integrity. While it is not always necessary to verify data, it can be a useful means for checking large amounts of data at once. For example, after burning a disc, it is much easier to verify that the checksums of the original data and the disc match, rather than checking every folder and file on the disc. Both Mac and Windows include free programs that can be used to generate and verify checksums. Mac users can use the built-in Apple Disk Utility and Windows users can use the File Checksum Integrity Verifier (FCIV). Updated April 14, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/checksum Copy
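For readers who want to compute a checksum themselves, the sketch below uses two classes from Java's standard library: CRC32 for a quick integrity value and MessageDigest for a stronger SHA-256 hash. The file name is a placeholder; substitute the path of any file you want to verify.

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.zip.CRC32;

public class ChecksumExample {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Path.of("backup.zip"));   // placeholder file name

        // Cyclic redundancy check: fast, good at catching accidental corruption
        CRC32 crc = new CRC32();
        crc.update(data);
        System.out.printf("CRC32:   %08x%n", crc.getValue());

        // Cryptographic hash: also resistant to deliberate tampering
        byte[] sha256 = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : sha256) hex.append(String.format("%02x", b));
        System.out.println("SHA-256: " + hex);
    }
}

If the value printed here matches the checksum published alongside a download or disk image, the data almost certainly arrived intact.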

Chip

Chip Technically speaking, a computer chip is a piece of silicon with an electronic circuit embedded in it. However, the word "chip" is often used as a slang term that refers to various components inside a computer. It typically describes an integrated circuit, or IC, such as a central processor or a graphics chip, but may also refer to other components such as a memory module. While "chip" is a somewhat ambiguous term, it should not be confused with the term "card." For example, a laptop might have a graphics chip embedded in the motherboard, while a desktop computer may contain a graphics card connected to a PCI or AGP slot. A graphics card may contain a chip, but the chip cannot contain a card. Similarly, a CPU may contain a chip (the processor), but it may also contain several other components. Therefore, the term "chip" can be used to refer to specific components, but should not be used to describe multiple components that are grouped together. Updated June 25, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/chip Copy

Chiplet

Chiplet A chiplet is a small, self-contained block of integrated circuits that performs a specific function. Chip designers combine multiple chiplets with different uses into a single chip package. This type of chip package is similar to a System-on-a-Chip (SoC) processor, which combines several distinct components into a single chip. However, an SoC is manufactured as one piece, while chiplets are manufactured separately and combined later. Monolithic SoC designs combine every element into a single piece of silicon — the CPU, GPU, memory, I/O controllers, and other components. In a chiplet design, each chiplet is designed and manufactured independently, then combined into a chip package in a later step. High-speed interconnects join each chiplet in a chip package to allow data to flow between them as if they were part of a single chip. AMD's EPYC series of processors, for example, is designed using chiplets. Chiplet-based designs provide several important benefits over normal SoC designs: Modularity — Chip designers can create chiplets for specific functions and combine them as necessary in a chip package. Specific chiplets within a design can receive updates without the entire chip receiving a redesign. Chip designers can even integrate chiplets from other manufacturers into their designs. Improved Yields — The smaller individual chiplets are easier to manufacture at scale than large, complex SoC chips. A single defect in an SoC may cause the entire chip to be unusable, but a manufacturer can test each individual chiplet to confirm it works before adding it to a chip package. Multiple Processes — Each chiplet within a chip package can be manufactured using the most suitable process for that chiplet. Components like CPU and GPU cores may use an advanced fabrication process that maximizes performance, while other components whose performance is not as critical can use an older, less expensive process. Improved Performance and Efficiency — Chip designers can optimize a chip package for specific tasks by changing the chiplets it uses. For example, a chip package optimized for graphics performance would include more high-power GPU cores than another chip of the same family designed for power efficiency, even though the other chiplets in the package would be identical. NOTE: As of 2023, most major processor manufacturers have started using chiplets in some of their processor designs, including Intel, AMD, and Apple. Updated November 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/chiplet Copy

Chipset

Chipset A chipset is a group of integrated circuits that work together. It may refer to the design of a single component or may describe the relationship of multiple components within a computer system. For example, the chipset of a video card describes the design of the card, while a motherboard chipset describes its layout and the different components it supports. Diagrams are often used to illustrate chipset designs. For example, a diagram of a video card chipset may contain the GPU, video RAM, and the PCI bus. It may also include lines and arrows that represent the circuitry between the components. Together, the components and circuitry make up the overall design of the chipset, and may also be called the architecture of the video card. A motherboard chipset diagram may include components such as the CPU, RAM, video card, and I/O ports. A detailed diagram may also include lesser-known components, such as the northbridge, southbridge, and frontside bus. Regardless of the level of detail, all motherboard chipset diagrams include lines and arrows that visually describe the way data flows between the different components. Most internal computer components are designed for a specific chipset. Therefore, if you decide to upgrade any internal hardware, it is important to know what type of chipset your computer uses. In many cases, a computer's chipset is synonymous with the chipset of the motherboard. Once you find out what chipset your motherboard uses, you can determine what components are compatible with your machine. Updated July 31, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/chipset Copy

Chromebook

Chromebook A Chromebook is a laptop that runs Google's Chrome OS operating system. While Google sells its own Chromebook model, the Chromebook Pixel, many other manufacturers offer Chromebooks as well. Examples include Dell, HP, Toshiba, Samsung, ASUS, and Acer. Chromebooks are designed to be inexpensive and highly portable. They are considered thin clients since they have minimal internal storage. Unlike traditional laptops, Chromebooks are designed to run cloud-based applications and store data online. While the Chrome OS and some applications can run offline, Chromebooks work best when used with an Internet connection. The Chrome OS includes several Google apps, such as the Chrome web browser, Gmail, Google+, and YouTube applications. It also runs the Google Drive office suite and related apps such as Google Docs, Google Drawings, and Google Forms. Third party applications can be downloaded from the Chrome Web Store. Some Android apps can also run on Chrome OS via Google's App Runtime for Chrome (ARC). Since Chromebooks do not run Windows or OS X, they do not natively support many traditional applications, such as Microsoft Office. However, you can run online versions of Word, Excel, and other common applications from the Chrome OS or through the Chrome web browser. These applications run on a remote server, but look and function like traditional desktop applications. Chromebooks also support remote access software, which allows you to operate Windows or OS X computers from a Chromebook. Updated April 7, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/chromebook Copy

CIFS

CIFS Stands for "Common Internet File System." CIFS is a standard file system designed for sharing files over the Internet. It is part of the SMB protocol, which allows multiple types of computers to share data and peripherals over a network. CIFS enables users to access files remotely from multiple platforms, including Windows, Mac, Linux, and others. Each operating system has its own file system, which defines how files and folders are organized. For example, most Windows computers use NTFS, while Macs user HFS. Proprietary file systems are fine when accessing files locally (from the computer itself), but it can cause compatibility issues when users try to access files from a remote system. If the remote device does not recognize the file system of the computer, it won't be able read the files. CIFS solves this problem by serving as a universal file system that is supported by multiple platforms. The Common Internet File System provides a standard set of commands that computers can use to access a remote system and read and write files remotely. It supports both anonymous file transfers and authenticated access, which can be used to prevent unauthorized access to certain folders and files. CIFS also includes file locking, which prevents multiple users from editing the same file at the same time. Updated December 19, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cifs Copy

Circuit

Circuit In electronics, a circuit is a closed path that allows electricity to flow from one point to another. It may include various electrical components, such as transistors, resistors, and capacitors, but the flow is unimpeded by a gap or break in the circuit. A flashlight is an example of a basic circuit. When the switch is off, the circuit is not closed, meaning electrical current will not flow from the batteries to the flashlight's bulb. When you move the switch to the on position, a piece of metal in the flashlight physically closes the gap in the circuit. Electricity from the batteries then flows to the bulb, making it light up. In computing, the term "circuit" is used more liberally and may be used to reference a circuit board or an integrated circuit. The internal workings of computers and other electronic devices are composed of these components, which may each contain hundreds or thousands of individual circuits. The large number of circuits inside computers allow them to route data to different locations and perform complex calculations. For example, a chip may route graphics operations to the GPU and other operations to the CPU. These processors contain logic gates that can rapidly open and close circuits. Modern processors have so many circuits and transistors, they can perform billions of instructions every second. Updated April 22, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/circuit Copy

CISC

CISC Stands for "Complex Instruction Set Computing," and pronounced "sisk." CISC is a type of processor design that includes a large set of computer instructions in its built-in instruction set. CISC instructions can vary in length, from simple tasks to multi-step operations, with longer tasks taking several clock cycles. CISC is one of two major design philosophies in instruction set design, along with Reduced Instruction Set Computing (RISC). The x86 processor architecture, which includes processors made by Intel and AMD, is the most widely-used type of CISC processor. A single instruction in a CISC instruction set may perform multiple operations — for example, a single instruction may tell the processor to load values from two memory banks, multiply them, then store the result. Integrating complex instructions directly into the processor's instruction set requires a software compiler to create fewer lines of assembly code, which in turn require less memory to store instructions. CISC vs RISC CISC and RISC are two different ideas of instruction set design. Each has its own advantages and drawbacks when compared to the other. CISC instruction sets contain more instructions that can handle more complicated tasks over several clock cycles, while RISC instruction sets include fewer instructions that each take a single clock cycle. When several RISC instructions are chained together to complete the same task as a single CISC instruction, the number of clock cycles required is often the same. CISC processors also require more transistors to store the larger instruction sets than RISC processors. RISC processors can use those transistors for memory storage instead, or use fewer transistors for a less-complex chip design — which can be smaller, require less power, and produce less heat. Updated December 8, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cisc Copy

Class

Class A class is used in object-oriented programming to describe one or more objects. It serves as a template for creating, or instantiating, specific objects within a program. While each object is created from a single class, one class can be used to instantiate multiple objects. Several programming languages support classes, including Java, C++, Objective-C, and PHP 5 and later. While the syntax of a class definition varies between programming languages, classes serve the same purpose in each language. All classes may contain variable definitions and methods, or subroutines that can be run by the corresponding object. Below is an example of a basic Java class definition:

class Sample {
   public static void main(String[] args)
   {
      String sampleText = "Hello world!";
      System.out.println(sampleText);
   }
}

The above class, named Sample, includes a single method named main. Within main, the variable sampleText is defined as "Hello world!" The main method then calls System.out.println, the println method of the standard output stream provided by Java's built-in System class, to print the sample text to the text output window. Classes are a fundamental part of object-oriented programming. They allow variables and methods to be isolated to specific objects instead of being accessible by all parts of the program. This encapsulation of data protects each class from changes in other parts of the program. By using classes, developers can create structured programs with source code that can be easily modified. NOTE: While classes are foundational in object-oriented programming, they serve as blueprints, rather than the building blocks of each program. This is because classes must be instantiated as objects in order to be used within a program. Constructors are typically used to create objects from classes, while destructors are used to free up resources used by objects that are no longer needed. Updated April 18, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/class Copy
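To complement the example above, the following sketch shows instantiation in action: a simple constructor stores a value in an instance variable, and two separate objects are created from the same class. The class and variable names are purely illustrative.

public class Greeting {
    private String recipient;           // instance variable: each object gets its own copy

    public Greeting(String recipient) { // constructor: runs when an object is instantiated
        this.recipient = recipient;
    }

    public void print() {               // method that operates on this object's data
        System.out.println("Hello, " + recipient + "!");
    }

    public static void main(String[] args) {
        Greeting first = new Greeting("world");    // two objects instantiated
        Greeting second = new Greeting("reader");  // from the same class
        first.print();   // prints "Hello, world!"
        second.print();  // prints "Hello, reader!"
    }
}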

Clean Install

Clean Install A clean install is an operating system (OS) installation that overwrites all other content on the hard disk. Unlike a typical OS upgrade, a clean install removes the current operating system and user files during the installation process. When a clean install finishes, the hard disk only contains the new operating system, similar to a computer that is used for the first time. In most cases, a clean install is not necessary when upgrading your operating system. It is much easier and safer to perform a standard "upgrade and install," which simply upgrades the necessary files and leaves the user files in place. However, sometimes an OS upgrade is not possible because important files have become lost or corrupted. In this case, a clean install may be the only option. Some users may also prefer to perform a clean install so that no lingering problems from the previous OS will affect the newly installed operating system. Additionally, a clean install may be appropriate when installing an OS on a new hard drive or when transferring ownership of a computer to another person. Both Windows and Mac OS X allow you to perform a clean install when upgrading your operating system. The installer will give you the choice between a standard upgrade (typically the default option) and a clean installation near the beginning of the installation process. Windows 7 also lets you format and partition your installation disk if you select a clean install. In Mac OS X, you can use the Disk Utility program to format or partition your drive before you perform the clean install. NOTE: Since a clean install wipes out all data on the primary hard disk, it is crucial to back up your data before performing the installation. While it is wise to make sure you have a recent backup of your data before any OS upgrade, it is extremely important when performing a clean install. Backing up your data to an external hard drive or another computer is a good option. It is also smart to check your backup and make sure it contains all the files you need so you do not accidentally lose any files. Updated January 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/clean_install Copy

Cleanroom

Cleanroom A cleanroom is a controlled indoor space that maintains extremely low levels of airborne particles and contaminants. These rooms are essential for industries that produce highly sensitive equipment, such as microprocessors, pharmaceuticals, and medical devices, where even microscopic dust particles can compromise product quality and safety. To minimize contamination, cleanrooms utilize high-efficiency particulate air (HEPA) or ultra-low penetration air (ULPA) filters that continually circulate and purify the air. They often employ sophisticated airflow designs, which direct purified air in a consistent, unidirectional stream, preventing particles from settling on surfaces or products. Cleanrooms also regulate temperature, humidity, and air pressure to maintain optimal conditions and reduce the risk of interference with production processes. Monitoring systems continuously assess particle levels, temperature, and other parameters to ensure the room remains within the required limits. Personnel in cleanrooms must wear specialized protective garments, often called "bunny suits," which cover them entirely to prevent contaminants from entering the environment. These suits resemble space attire but are lightweight and designed to allow for ease of movement in the controlled setting. Cleanroom Classifications Cleanrooms are classified according to international standards, such as the ISO 14644-1 standard, which defines the maximum permissible particle count for various particle sizes. For example, an ISO Class 6 cleanroom (Federal Standard Class 1000) permits up to 1,000 particles larger than 0.5 microns per cubic foot of air. An ISO Class 5 cleanroom (Federal Standard Class 100) allows no more than 100 such particles. Most CPU fabrication requires ISO Class 6 or cleaner. NOTE: Cleanrooms may also be called "clean rooms." Updated October 28, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cleanroom Copy

Client

Client Businesses have clients and servers have clients. In both instances, there exists a one-to-many relationship. Just like a business may have several clients, a server can communicate with multiple clients. In computer networking, this is called the client-server model. A client is any device that communicates with a server. It may be a desktop computer, laptop, smartphone, or any other network-compatible device. In a home network, "smart" devices, such as Wi-Fi-enabled thermostats, lights, and appliances, are considered clients. In an office network, systems that access files from network-attached storage are clients of the file server. Most networks allow client-to-client communication, though the data flows through a central point, such as a router or switch. On a larger scale, whenever you access a website, your device is the client, and the web server hosting the website is the server. From a software perspective, your web browser is the client software and the program that responds to requests on the web server (Apache, IIS, etc.), is the server software. Similarly, when you check your email, you connect to a mail server. Your device is a client of the mail server, and your email program or webmail interface is the client software. Updated February 6, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/client Copy

Client-Server Model

Client-Server Model The client-server model describes how a server provides resources and services to one or more clients. Examples of servers include web servers, mail servers, and file servers. Each of these servers provides resources to client devices, such as desktop computers, laptops, tablets, and smartphones. Most servers have a one-to-many relationship with clients, meaning a single server can provide resources to multiple clients at one time. When a client requests a connection to a server, the server can either accept or reject the connection. If the connection is accepted, the server establishes and maintains a connection with the client over a specific protocol. For example, an email client may request an SMTP connection to a mail server in order to send a message. The SMTP application on the mail server will then request authentication from the client, such as the email address and password. If these credentials match an account on the mail server, the server will send the email to the intended recipient. Online multiplayer gaming also uses the client-server model. One example is Blizzard's Battle.net service, which hosts online games for World of Warcraft, StarCraft, Overwatch, and others. When players open a Blizzard application, the game client automatically connects to a Battle.net server. Once players log in to Battle.net, they can see who else is online, chat with other players, and play matches with or against other gamers. While Internet servers typically provide connections to multiple clients at a time, each physical machine can only handle so much traffic. Therefore, popular online services distribute clients across multiple physical servers, using a technique called distributed computing. In most cases, it does not matter which specific machine users are connected to, since the servers all provide the same service. NOTE: The client-server model may be contrasted to the P2P model, in which clients connect directly to each other. In a P2P connection, there is no central server required, since each machine acts as both a client and a server. Updated June 17, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/client-server_model Copy
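The request/response pattern described above can be boiled down to a few dozen lines of socket code. The sketch below is a minimal, self-contained illustration using Java's standard networking classes: one method acts as a tiny server that echoes back whatever a client sends, and another acts as the client. The port number and "echo" service are placeholders chosen only for the example.

import java.io.*;
import java.net.*;

public class EchoDemo {
    // A minimal server: accepts one client, echoes back each line it receives
    static void runServer() throws IOException {
        try (ServerSocket server = new ServerSocket(5000);   // listen on a placeholder port
             Socket client = server.accept();                // accept one client connection
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);                // the server's "service"
            }
        }
    }

    // A minimal client: connects to the server, sends a request, prints the reply
    static void runClient() throws IOException {
        try (Socket socket = new Socket("localhost", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("hello server");
            System.out.println(in.readLine());               // prints "echo: hello server"
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) runServer(); else runClient();
    }
}

Run one copy with the argument "server" and a second copy with no arguments to see the two roles communicate over a single connection.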

Clip Art

Clip Art Clip art is a collection of pictures or images that can be imported into a document or another program. The images may be either raster graphics or vector graphics. Clip art galleries may contain anywhere from a few images to hundreds of thousands of images. Clip art is typically organized into categories, such as people, objects, nature, etc., which is especially helpful when browsing through thousands of images. Most clip art images also have keywords associated with them. For example, a picture of a female teacher in a classroom may have the keywords "school," "teacher," "woman," "classroom," and "students" associated with it. Most clip art programs allow you to search for images based on these keywords. When you find a clip art image you want to use, you can copy it to your computer's clipboard and paste it into another program, such as Photoshop or Microsoft Word. You may also be able to export the image to the Desktop or another folder on your hard disk. Most clip art is royalty free, meaning you can use the images without paying royalties to the creators of the images. So if you buy a clip art package with 50,000 images for $50.00, you are only paying 1/10 of one cent for each image. That's a pretty good deal. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/clipart Copy

Clipboard

Clipboard The clipboard is a special location in a computer's memory that temporarily stores copied data. It can hold any data type, including text, images, video, and audio. It can even store entire files and folders. Data can be added to the clipboard using the Copy or Cut commands and placed in a new location using the Paste command. The clipboard can (in most cases) store one piece of data at a time. It does not clear its contents when you paste them, so you can paste the same piece of data multiple times in different locations without copying it again. For example, you can copy an email address from a contact card and paste it into a web form or a text document without copying it multiple times. Copying or cutting replaces the existing clipboard contents with new data. The clipboard's previous contents cannot be recovered once replaced, so if you cut something important from a document, paste it somewhere before copying or cutting something else. Since the clipboard stores its contents in the system RAM, it is also cleared when a computer is shut down or rebooted. In recent years, popular operating systems have improved the built-in clipboard to make it more powerful and convenient. For example, Windows 10 (and later versions) has an optional Clipboard History feature that lets you store multiple items in the clipboard and choose which item to paste using a separate keyboard shortcut: Windows Key + V. Third-party clipboard management utilities can add a similar feature to other operating systems. The Universal Clipboard feature in macOS, iOS, and iPadOS allows you to copy data from one device and paste it on another. For example, you can copy an image from your iPhone and paste it into a document on your Mac, as long as you're signed into the same iCloud account on both devices and they are on the same local network. Updated July 5, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/clipboard Copy
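Programs can also read and write the system clipboard directly. The sketch below uses Java's standard AWT data-transfer classes to perform a programmatic "copy" and "paste" of a short text string; the string itself is just a placeholder, and the program needs a desktop (non-headless) environment to run.

import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.StringSelection;

public class ClipboardDemo {
    public static void main(String[] args) throws Exception {
        // Requires a desktop session; throws HeadlessException on servers without a display
        Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();

        // "Copy": place a piece of text on the system clipboard
        clipboard.setContents(new StringSelection("user@example.com"), null);

        // "Paste": read the current clipboard contents back as text
        String pasted = (String) clipboard.getData(DataFlavor.stringFlavor);
        System.out.println("Clipboard contains: " + pasted);
    }
}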

CLOB

CLOB Stands for "Character Large Object." A CLOB is a data type used by various database management systems, including Oracle and DB2. It stores large amounts of character data, up to 4 GB in size. The CLOB data type is similar to a BLOB, but includes character encoding, which defines a character set and the way each character is represented. BLOB data, on the other hand, consists of unformatted binary data. Common data types used for storing character data include char, varchar, and text. Some database management systems also support additional text data types such as tinytext, mediumtext, and longtext. If the standard character data types are not large enough for a certain database field, the CLOB data type may be used. Since CLOB data may be very large, some database management systems do not store the text directly in the table. Instead, the CLOB field serves as an address, which references the location of the data. CLOBs provide a way to store unusually large amounts of text, such as an entire book or publication. However, some database programs cannot run certain text operations on CLOB fields, such SQL commands with the "LIKE" condition. Therefore, it is often better to use other character data types for smaller text values. Updated July 9, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/clob Copy

Clock Cycle

Clock Cycle A clock cycle, or simply a "cycle," is a single electronic pulse of a CPU. During each cycle, a CPU can perform a basic operation such as fetching an instruction, accessing memory, or writing data. Since only simple commands can be performed during each cycle, most CPU processes require multiple clock cycles. In physics, the frequency of a signal is measured in cycles per second, or "hertz." Similarly, the frequency of a processor is measured in clock cycles per second. Since modern processors can complete millions of clock cycles every second, processor speeds are often measured in megahertz or gigahertz. The frequency of a processor is also known as the processor's clock speed. While the clock speed is important in determining the processor's overall performance, it is not the only factor. Since processors have different instruction sets, they may differ in the number of cycles needed to complete each instruction, or CPI (cycles per instruction). Therefore, some processors can perform faster than others, even at slower clock speeds. Updated July 24, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/clockcycle Copy
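The relationship described above can be written as a simple formula: execution time equals instruction count times cycles per instruction, divided by clock frequency. The hypothetical numbers in the sketch below show how a processor with a slower clock but a lower CPI can still finish a task sooner.

public class CpiComparison {
    // execution time in seconds = (instructions * cycles per instruction) / clock frequency in Hz
    static double executionTime(long instructions, double cpi, double clockHz) {
        return instructions * cpi / clockHz;
    }

    public static void main(String[] args) {
        long instructions = 1_000_000_000L;   // hypothetical workload: one billion instructions

        double fastClockHighCpi = executionTime(instructions, 4.0, 3.0e9);  // 3.0 GHz, 4 cycles per instruction
        double slowClockLowCpi  = executionTime(instructions, 2.0, 2.5e9);  // 2.5 GHz, 2 cycles per instruction

        System.out.printf("3.0 GHz CPU, CPI 4: %.2f seconds%n", fastClockHighCpi);  // 1.33 seconds
        System.out.printf("2.5 GHz CPU, CPI 2: %.2f seconds%n", slowClockLowCpi);   // 0.80 seconds
    }
}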

Clock Speed

Clock Speed Clock speed measures how many processing cycles a processor can complete per second. It is typically measured in megahertz (millions of cycles per second) or gigahertz (billions of cycles per second) — for example, a processor with a clock speed of 3.0 GHz can perform 3 billion cycles per second. A processor's clock speed is one of its primary specifications and contributes to its total performance. A CPU's clock speed is determined by multiplying a base clock signal (provided by the motherboard) by an internal multiplier value on the processor itself. For example, if the base clock signal is 100 MHz and the processor multiplier is 30x, the processor runs at a clock speed of 3 GHz. Some high-end processors have an unlocked multiplier that can be changed in the BIOS or UEFI settings, allowing you to overclock the processor. A CPU may also have a turbo mode (called Turbo Boost on Intel CPUs and Precision Boost on AMD CPUs) that automatically increases clock speed when the processor is under high demand. However, this consumes more energy and thus produces more heat, so a processor can only engage this mode temporarily before returning to its base speed. Processor clock speed displayed in Windows System settings When all other variables are the same, a processor with a higher clock speed will outperform a slower one. However, other architectural factors can also affect processor performance, limiting how well clock speed can serve as a point of comparison. For example, some architectures may use optimized instruction sets to reduce how many cycles are needed to perform certain functions. Increasing the number of processor cores can also increase performance without increasing clock speed, allowing one processor to perform more simultaneous calculations. In many cases, a newer processor with a more optimized architecture will outperform an older processor running at a higher clock speed, so it is more helpful to consult performance benchmarks when comparing CPUs instead of relying on clock speed alone. Updated October 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/clock_speed Copy
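The relationship between the base clock, the multiplier, and the resulting clock speed is a one-line calculation. The numbers below are illustrative and not tied to any particular CPU.

    base_clock_hz = 100_000_000     # 100 MHz base clock from the motherboard
    multiplier = 30                 # internal CPU multiplier

    clock_speed_hz = base_clock_hz * multiplier
    print(clock_speed_hz / 1e9, "GHz")   # 3.0 GHz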

Clone

Clone In computing, the word "clone" can mean several things. It may refer to a disk whose contents have been duplicated exactly from another disk in a process called "disk cloning." It may refer to software that mimics the functionality of a pricier mainstream alternative. It may also refer to IBM-compatible computers made by other companies in the 1980s and 90s. Disk Clones A disk clone is a disk whose contents were copied from another disk using a disk cloning utility. Cloning a drive involves more than just copying a disk's files; it duplicates the entire file system, including all partitions and metadata. Disk cloning is often used for creating backups, duplicating a configuration for deployment, and preserving the contents of a disk for digital forensics. Cloning a disk may also involve creating a disk image file. SuperDuper! is a disk cloning utility for macOS Disk cloning functionality is generally provided by third-party utilities. Cloning a disk requires that it not be in use, which means that you cannot clone a computer's boot disk while the operating system is running. Some disk cloning utilities, like Clonezilla, create a bootable flash drive that you can load instead of your operating system, leaving the boot drive available for cloning. Application Clones An application clone is a software application that reproduces the functionality of another, more popular application. Clone apps may be made for people who cannot afford the original application, run an operating system that the original application does not support, or just want a free and open-source alternative. For example, Inkscape is a popular open-source Adobe Illustrator clone, providing similar functionality for creating vector drawings and illustrations. IBM Clones In the 1980s and 1990s, "IBM clone" or "IBM-compatible" meant that a computer used the same architecture as the original IBM Personal Computer (PC), but was built by another manufacturer. After the IBM PC debuted in 1981, other companies like HP, Dell, and Compaq built computers using the same Intel CPUs as the PC. Marketed as "IBM-compatible," these clones could run the same software as the IBM PC (including the MS-DOS operating system). Due to the hardware and software compatibility between IBM PCs and third-party clones, the x86 architecture running MS-DOS (and later Microsoft Windows) became an industry standard. The entire ecosystem of PC clones would eventually be known as "PCs." The term remains synonymous with computers running Windows on Intel CPUs, long after IBM stopped making personal computers. NOTE: IBM's PC was not the only computing platform to be cloned. In the mid-1990s, Apple briefly licensed the Macintosh platform to third-party manufacturers through the "Macintosh Clone Program." However, this program was short-lived and only allowed clones between 1994 and 1997. Updated August 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/clone Copy
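On Unix-like systems, a raw block-for-block copy is often made with the dd utility. The Python sketch below simply shells out to dd; the device paths are placeholders, the target disk must be at least as large as the source, and neither disk should be mounted or in use. Treat it as an illustration only, since pointing it at the wrong devices will destroy data.

    import subprocess

    # Placeholder device paths; double-check these before running anything like this.
    source = "/dev/sdX"
    target = "/dev/sdY"

    # Copy the source disk to the target disk in 4 MB blocks.
    subprocess.run(["dd", f"if={source}", f"of={target}", "bs=4M", "status=progress"], check=True)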

Cloud

Cloud The term "cloud" comes from early network diagrams, in which the image of a cloud was used to indicate a large network, such as a WAN. The cloud eventually became associated with the entire Internet, and the two terms are now used synonymously. The cloud may also be used to describe specific online services, which are collectively labeled "cloud computing." Examples of popular cloud-based services include web applications, SaaS, online backup, and other types of online storage. Traditional Internet services like web hosting, email, and online gaming may also be considered part of the cloud since they are hosted on Internet servers, rather than users' local computers. Even social networking websites such as Facebook and LinkedIn are technically cloud-based services, since they store your information online. While "the cloud" is simply a buzzword for most consumers, it plays an important role for businesses. By moving software services to the cloud, companies can share data more efficiently and centralize their network security. Additionally, cloud-based virtualization can help businesses reduce the number of computer systems and software licenses they need to buy. The end result is a more efficient and less costly way of running a business. Updated May 30, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cloud Copy

Cloud Computing

Cloud Computing Cloud computing refers to applications and services offered over the Internet. These services are offered from data centers all over the world, which collectively are referred to as the "cloud." This metaphor represents the intangible, yet universal nature of the Internet. The idea of the "cloud" simplifies the many network connections and computer systems involved in online services. In fact, many network diagrams use the image of a cloud to represent the Internet. This symbolizes the Internet's broad reach, while simplifying its complexity. Any user with an Internet connection can access the cloud and the services it provides. Since these services are often connected, users can share information between multiple systems and with other users. Examples of cloud computing include online backup services, social networking services, and personal data services such as Apple's MobileMe. Cloud computing also includes online applications, such as those offered through Microsoft Online Services. Hardware services, such as redundant servers, mirrored websites, and Internet-based clusters are also examples of cloud computing. Updated April 23, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cloud_computing Copy

Cloud Waste

Cloud Waste Cloud waste is unnecessary spending on cloud services. It refers to unused or underused services during one or more billing cycles, which results in "wasted" spending. Many types of cloud waste exist. Some examples include: Unused apps in Adobe Creative Cloud Unused Microsoft 365 subscriptions Underutilized server bandwidth Unnecessary cloud security features Underused cloud computing services Underused monthly mailing list email sends Cloud waste affects everyone from home users to enterprise businesses. For example, a student who purchases an annual Creative Cloud subscription but only uses it for six months may waste several hundred dollars a year. A company that purchases 10,000 Microsoft 365 licenses or "seats" but only uses 8,000 may waste thousands of dollars a month in unused subscription fees. In recent years, many companies have switched from desktop software to SaaS solutions to reduce costs. However, CTOs and other managers must monitor SaaS usage since unnecessary subscriptions can go overlooked for months or even years. Cloud computing platforms like Azure, AWS, and Google Cloud provide usage-based billing, which helps reduce cloud waste. Similar to an electrical meter, these services track each client's specific usage, such as compute time, API calls, bandwidth used, etc. Still, many cloud platforms have base monthly service fees and charge for add-ons that may go underutilized over time. Cloud-based services provide an efficient and cost-effective way for companies to manage IT solutions. However, it is essential for individuals and organizations to regularly review their cloud subscriptions to make sure they are not wasting money on unnecessary services. Updated June 23, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cloud_waste Copy
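The seat-license example above can be worked out with simple arithmetic. The per-seat price below is an assumption used purely for illustration.

    seats_purchased = 10_000
    seats_used = 8_000
    price_per_seat = 12.50          # assumed monthly cost per license, in dollars

    wasted_per_month = (seats_purchased - seats_used) * price_per_seat
    print(wasted_per_month)         # 25000.0 -- $25,000 per month spent on unused subscriptions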

Cluster

Cluster In computing, a cluster may refer to two different things: 1) a group of sectors in a storage device, or 2) a group of connected computers. 1) A group of sectors A sector is the smallest unit that can be accessed on a storage device like an HDD or SSD. A cluster, or allocation unit, is a group of sectors that make up the smallest unit of disk allocation for a file within a file system. In other words, a file system's cluster size is the smallest amount of space a file can take up on a computer. A common sector size is 512 bytes. A common cluster size is 8 sectors. Therefore, many file systems have a minimum cluster size of 4 kibibytes (8 x 512 bytes). Most files require a large number of clusters, which means the file contents are spread across multiple clusters on the drive. Often the data can be written in contiguous blocks so that the file contents are stored in one physical location. However, when a hard drive begins to fill up, there may not be enough contiguous clusters available to save large files in a single area. Instead, they must be written in multiple locations on the disk. This is called fragmentation and can slow down the hard drive's read and write speeds. 2) A group of connected computers A cluster can also refer to a group of machines that work together to perform a similar function. Unlike grid computing, a computer cluster is controlled by a single software program that manages all the computers or "nodes" within the cluster. The nodes work together to complete a single task. This process is called "parallel computing" since the nodes perform operations in tandem. Computer clusters can range from two machines to hundreds of connected computers. Small clusters are often used to improve the performance of web and online gaming services by handling multiple incoming requests in parallel. A web farm, for example, is a type of cluster that provides low latency access to websites. Large clusters can be used to perform scientific calculations or to run a large number of complex algorithms. For example, a large cluster may be used to apply textures and lighting effects to 3D models in each frame of an animated movie. Updated April 22, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cluster Copy
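The storage meaning of "cluster" comes down to simple arithmetic: the cluster size is the sector size times the number of sectors per cluster, and every file occupies a whole number of clusters. A short sketch with the figures from the entry above:

    import math

    sector_size = 512                # bytes
    sectors_per_cluster = 8
    cluster_size = sector_size * sectors_per_cluster        # 4096 bytes (4 KiB)

    file_size = 10_000               # a 10,000-byte file
    clusters_needed = math.ceil(file_size / cluster_size)   # 3 clusters
    space_on_disk = clusters_needed * cluster_size          # 12,288 bytes actually allocated
    print(cluster_size, clusters_needed, space_on_disk)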

CMOS

CMOS Stands for "Complementary Metal Oxide Semiconductor." It is a technology used to produce integrated circuits. CMOS circuits are found in several types of electronic components, including microprocessors, memory chips, and digital camera image sensors. The "MOS" in CMOS refers to the transistors in a CMOS component, called MOSFETs (metal oxide semiconductor field-effect transistors). The "metal" part of the name is a bit misleading, as modern MOSFETs often use polysilicon instead of aluminum as the conductive material. Each MOSFET includes two terminals ("source" and "drain") and a gate, which is insulated from the body of the transistor. When enough voltage is applied between the gate and body, electrons can flow between the source and drain terminals. The "complementary" part of CMOS refers to the two different types of semiconductors each transistor contains — N-type and P-type. N-type semiconductors have a greater concentration of electrons than holes, or places where an electron could exist. P-type semiconductors have a greater concentration of holes than electrons. These two semiconductors work together and may form logic gates based on how the circuit is designed. CMOS Advantages CMOS transistors are known for their efficient use of electrical power. They require no electrical current except when they are changing from one state to another. Additionally, the complementary semiconductors work together to limit the output voltage. The result is a low-power design that gives off minimal heat. For this reason, CMOS transistors have replaced previous designs (such as CCDs in camera sensors) and are used in most modern processors. NOTE: CMOS memory in a computer is a type of non-volatile RAM (NVRAM) that stores BIOS settings and date/time information. Updated May 25, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cmos Copy

CMP

CMP Stands for "Consent Management Platform." A CMP is a software application that helps website and app owners track whether their users consent to the gathering of their data. This personal data helps provide targeted advertising based on the user's demographic information and online activity. The use of a CMP that properly tracks and manages user consent is required to track user data from EU residents. Multiple vendors provide CMPs that help track user preferences, allowing businesses to choose the one that fits their needs. All CMPs must have a baseline set of tools. When you visit a site that collects data using a CMP, it must provide a user interface for requesting your consent — typically, a message box that appears when you first visit the site or open the app. These messages should help you understand what data is collected, who may access it, and what it will be used for. Many also allow you to customize what you consent to by sharing only certain types of data with certain third parties. A pop-up provided by a CMP that allows you to customize which third parties can access your data A CMP must include a database that keeps a record of who consents and who opts out of data tracking. It must also allow you to withdraw consent later if you wish. A CMP may also provide the actual tracking mechanism — typically cookies — and a connection to a third-party data processing service that analyzes collected user data. The use of a CMP is required by the Transparency and Consent Framework (TCF). The TCF is an open standard framework that outlines when user consent is needed, how to ask for it, how to track it, and what a website or app owner may do with the personal data they collect. The TCF was developed to help websites and advertisers comply with the General Data Protection Regulation (GDPR) law in the EU. Updated August 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cmp Copy

CMS

CMS Stands for "Content Management System." A CMS is a software tool that allows you to create, edit, and publish content. While early CMS software was used to manage documents and local computer files, most CMS systems are now designed exclusively to manage content on the Web. The goal of a CMS is to provide an intuitive user interface for building and modifying webpage content. Each CMS also provides a web publishing tool that allows one or more users to publish updates live on the Web. The editing component is called the content management application (CMA), while the publishing tool is called the content delivery application (CDA). These two components are integrated together in a CMS to streamline the web development process. Content management systems are available as installable applications and web-based user interfaces. While CMS software programs, such as Adobe Contribute, were popular for a few years, they have largely been replaced by web-based CMSes. Most people prefer a web interface, since it simplifies the website updating process. Additionally, most web-based CMSes are updated automatically, ensuring all users have the latest tools to manage their content. Several web-based CMS tools are available. The following are some of the most popular ones: WordPress - free web software designed for creating template-based websites or blogs Blogger - Google's blogging tool designed specifically for maintaining a blog Joomla - a flexible web publishing tool that supports custom databases and extensions Drupal - an open source platform often used for developing community-based sites Weebly - a web-based platform for building simple personal and business websites Wix - a collection of web publishing tools for creating a highly customizable website Some CMS tools are free to use, while others require a monthly fee. Many CMSes provide free basic components, but charge for high-quality templates, web hosting, custom domain names, or other features. Before deciding on a CMS, it is a good idea to review multiple options so you can choose the one that best fits your website goals. Updated March 28, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cms Copy

CMYK

CMYK Stands for "Cyan Magenta Yellow Black," or "Cyan Magenta Yellow Key." CMYK is a color model used for printing color images. It separates colors into four channels — cyan, magenta, yellow, and black. These colors correspond to the four ink colors used in home and commercial printers. Graphics and documents must be converted from RGB into CMYK before printing, either by image editing software or automatically by the printer driver. Some colors may appear duller after conversion to CMYK due to differences between the two models. RGB color spaces typically have a larger color gamut, since computer monitors can display more colors than mixed inks. A color photograph split into cyan, magenta, yellow, and black channels Unlike RGB and most other color models, CMYK is a subtractive model. Setting all four CMYK color sliders to 0% creates white, since that value represents the lack of ink on a blank page. Increasing the value of each color slider results in darker colors as more ink is added, until setting every slider to 100% results in black. CMYK color uses black ink in addition to the three colored inks for several reasons. Black ink is cheaper than color ink, so it is more efficient to use it whenever possible. Black ink also produces a truer black than a combination of color inks due to impurities present in the colors. Creating black by mixing colors also requires that the print heads or plates for the three colors are perfectly aligned; otherwise, text, lines, and other details will blur. NOTE: The "K" in the abbreviation represents black ink, but it actually stands for "key." This term comes from the plates used in a commercial four-color printing process. The three color ink plates create the areas of color on the page, but the black ink plate is responsible for the contrast and fine details in the resulting image. As a result, the black ink plate is known as the "key plate." Updated April 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cmyk Copy
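A rough idea of how an RGB value maps to CMYK can be shown with the basic textbook formula below. Real conversions rely on color profiles and ink characteristics, so this Python sketch is only an approximation.

    def rgb_to_cmyk(r, g, b):
        """Naive RGB (0-255) to CMYK (0-1) conversion; real workflows use ICC profiles."""
        if (r, g, b) == (0, 0, 0):
            return 0.0, 0.0, 0.0, 1.0        # pure black: key ink only
        r, g, b = r / 255, g / 255, b / 255
        k = 1 - max(r, g, b)
        c = (1 - r - k) / (1 - k)
        m = (1 - g - k) / (1 - k)
        y = (1 - b - k) / (1 - k)
        return c, m, y, k

    print(rgb_to_cmyk(255, 0, 0))   # red -> roughly (0.0, 1.0, 1.0, 0.0)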

Coaxial Cable

Coaxial Cable Coaxial (or “coax”) cable is a common type of cable used for transmitting data over long distances. It can carry either an analog or digital signal. While coax cables have many applications, they are most commonly used to transmit cable TV and Internet signals. Coax cables that run underground are typically thicker and more heavily insulated than the cables that connect your cable box or cable modem to the wall outlet. However, they all transmit data via a thin copper line in the middle of the cable. This wire is surrounded by a layer of insulation made of non-conductive or “dielectric” material. The dielectric layer is covered with one or more metallic shields that provide additional protection from signal interference. Finally, a protective plastic outer layer surrounds the entire cable. The heavy-duty design of coaxial cables is what allows them to carry data over long distances with minimal signal degradation. In many cases, coax cables laid by cable companies several decades ago are sufficient to provide HDTV and high-speed Internet access simultaneously. However, certain coax cables (such as RG-59 cables) are designed for low-bandwidth applications, like connecting a VCR to a TV, and may not provide enough bandwidth to carry a full HDTV signal. Coax cables labeled as “RG-6” are a better choice for HDTV and cable Internet service. Updated December 27, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/coaxial_cable Copy

Codec

Codec Codec is short for "coder-decoder." It is an algorithm used to encode data, such as an audio or video clip. The encoded data must be decoded when played back. A media codec is not equivalent to media compression, since it is possible to encode a file without compressing it. However, most codecs do compress the original data, reducing the size of the original file. This is important for multimedia files, since they often have large file sizes. Compressed files take up less disk space and can be downloaded more quickly. Generally, a codec reduces the file size of a media file, but increases the processing power required to play the file back correctly. Lossless vs Lossy Codecs Some codecs are lossless, meaning they do not reduce the quality of the original media file. Examples of lossless audio codecs include the Free Lossless Audio Codec (FLAC), and the Apple Lossless Audio Codec (ALAC). Video codecs that support lossless compression include H.264 and QuickTime RLE. A lossless codec can often reduce the file size of a media file to about 50% without altering the quality. Other codecs are lossy, meaning the compression reduces the quality of the media. Examples of lossy audio codecs are Adaptive Differential Pulse Code Modulation (ADPCM) and MPEG-1 Layer 3 (MP3). Common lossy video codecs include MPEG-2 and HEVC. Most lossy codecs provide a variable compression setting, which allows you to select how much to compress the media. For example, if you apply heavy compression to an audio file, it may reduce the file size to 10%, but the audio may sound like it has been compressed. If you use a lower compression setting that reduces the file size to 30%, it may be closer to the original file. NOTE: Lossy codecs are commonly applied to streaming media so the data can be transferred more quickly over the Internet. Updated June 14, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/codec Copy

Cold Boot

Cold Boot To perform a cold boot (also called a "hard boot") means to start up a computer that is turned off. It is often used in contrast to a warm boot, which refers to restarting a computer once it has been turned on. A cold boot is typically performed by pressing the power button on the computer. Both a cold boot and warm boot clear the system RAM and perform the boot sequence from scratch. However, unlike a cold boot, a warm boot may not clear all system caches, which store temporary information. Additionally, a cold boot performs a "power on self test" (POST), which runs a series of system checks at the beginning of the boot sequence. While a warm boot and cold boot are similar, a cold boot performs a more complete reset of the system than a warm boot. Therefore, if you are troubleshooting your computer, you may be asked to turn off your computer completely and perform a cold boot. This makes sure all temporary data is wiped from your system, which may help eliminate issues affecting your computer. Updated February 19, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cold_boot Copy

Collision

Collision In computer science, a "collision" has two different meanings. One occurs when two or more different sets of data produce the same resulting value when processed. The other is specific to networking and happens when two devices transmit data at the same time. 1. Data Collision A data collision may take place when hashing data or when calculating a checksum. A hash function reduces data to a smaller value and is often used in compression and cryptography. While the hash operation may save disk space, it is possible that two different inputs may produce the same output. Multiple hash functions can be used to avoid duplicate values when a collision occurs. Similarly, checksums are not guaranteed to be unique since they are smaller than the original data. While the probability is often very low, two different data sets can theoretically produce the same checksum value. A well-designed algorithm should minimize this risk. 2. Network Collision A network collision occurs when two or more devices attempt to transmit data over a network at the same time. For example, if two computers on an Ethernet network send data at the same moment, the data will "collide" and not finish transmitting. This is why most networking protocols confirm that packets have been received before transmitting additional data. Switches and routers can reduce collisions by checking if a transmission line is idle or "in use" before transmitting data. A common method is CSMA/CD, or "Carrier Sense Multiple Access with Collision Detection." While it is possible to reduce collisions, they cannot be completely avoided. For example, if two systems determine a line is idle and then transmit data at exactly the same time, a collision may occur. This can be resolved by retransmitting the data after a random delay. Updated December 20, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/collision Copy
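A tiny, deliberately weak hash function makes a data collision easy to see: two different inputs land on the same hash value. Real hash functions make this far less likely, but the principle is the same.

    def tiny_hash(data: bytes) -> int:
        """A toy 8-bit hash: the sum of the bytes modulo 256."""
        return sum(data) % 256

    first = b"AB"
    second = b"BA"
    print(tiny_hash(first), tiny_hash(second))   # both 131 -- different inputs, same hash (a collision)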

Color Depth

Color Depth Color depth, also known as bit depth, refers to the number of bits used to store color information for each pixel in a digital image. Using more bits per pixel allows a greater range of possible colors. Color depth can be measured using the total number of bits used for a pixel (bits per pixel) or the number of bits for each RGB channel (bits per channel). Most digital images and computer monitors use 24-bit color, which supports 16.7 million possible colors. Images and monitors that use a low color depth are limited in the range of colors they can show. For example, a single bit of color depth only provides two possible values for a pixel, resulting in a black-and-white image, while 4 bits of depth allow for 16 possible color values. Higher color depth allows a wider range of color information, but requires more data to store images and more bandwidth to send the video signal to the monitor. An image in 8-bit and 24-bit color Common Color Depths Since 16 million colors are more than the human eye can distinguish, 24-bit color is considered good enough for general use. However, some formats and workflows use other color depths. 8-bit color saves each pixel using a single byte of data, displaying a maximum of 256 possible colors. Most 8-bit images use indexed color, which creates a palette of 256 colors out of a wider gamut. They also use dithering to blend colors together more smoothly. .GIF and low-color .PNG images use 8-bit color. 24-bit color is referred to as True Color. It uses 8 bits per RGB channel to display 16.7 million possible colors. Most computer monitors are limited to 24-bit color. Some image formats add an 8-bit alpha (transparency) channel to use 32 bits per pixel. Most digital images and HD videos use this depth. 30-bit color is referred to as Deep Color. It uses 10 bits per RGB channel to display more than 1 billion possible colors. This depth allows for higher contrast and increased luminance levels, which is important for high-end HDR monitors and UHD 4K televisions. Most 4K videos use this depth. High-end digital cameras can capture images at even greater color depths when taking RAW photos, providing photographers with extra image data for more-precise photo editing. Depending on the sensor, digital cameras can take RAW photos with 12, 14, or even 16 bits per channel (36 to 48 bits total). Updated February 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/color_depth Copy
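The number of possible colors grows as a power of two with the number of bits, which a quick calculation makes concrete:

    for bits in (1, 4, 8, 16, 24, 30):
        print(bits, "bits per pixel ->", 2 ** bits, "possible colors")
    # 1 -> 2, 4 -> 16, 8 -> 256, 16 -> 65,536, 24 -> ~16.7 million, 30 -> ~1.07 billion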

Color Model

Color Model A color model is a way to represent colors mathematically using sets of numbers. A color model defines several measurable attributes of color (such as hue, saturation, and brightness) and represents each attribute as a number within a range. Most color models use sets of three values to define a color, while some use four. Each color model measures a different set of attributes and the relationship between those attributes. The most common color model for displaying digital images on a computer is the RGB model, which separates color into red, green, and blue light. The most common model for printing images, CMYK, instead represents colors as a mix of cyan, magenta, yellow, and black inks. Other models measure other properties of color like hue, luminance, saturation, or the deviation from a specified middle color. The same color represented numerically by 4 color models Software applications can convert color information from one model to another. For example, an image editing application that supports multiple color models allows a graphic designer to primarily use the HSL color model, which makes subtle color adjustments more intuitive. Meanwhile, the computer automatically converts it to the RGB model when it sends the image to the monitor. While color models define how colors are measured numerically, they do not necessarily represent the limited range of colors a device can display. In most cases, it isn't useful to specify a color that cannot be shown. Software and hardware makers split many color models, particularly RGB, into smaller color spaces that use a limited subset of colors. Updated April 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/color_model Copy
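Python's standard colorsys module can show the same color expressed in two different models. Note that its values run from 0 to 1 rather than the 0-255 range used by many image editors.

    import colorsys

    # A medium orange expressed in the RGB model (values scaled 0-1).
    r, g, b = 1.0, 0.5, 0.0

    # The same color expressed as hue, saturation, and value (HSV).
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(h, s, v)   # roughly (0.083, 1.0, 1.0) -- a fully saturated orange hue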

Color Profile

Color Profile A color profile (also known as an ICC profile) is a set of configuration data that tells a computer how to display colors on a given device. Each color profile contains data about the total range of colors available (known as a gamut), and how to map color values to other standard color spaces. It may also include the performance characteristics of a specific device, like its color temperature and gamma. A computer's color management software uses color profiles to keep colors consistent between devices. Monitors, digital cameras, scanners, and printers come with color profiles provided by their manufacturers. These profiles include specific information about how a device captures or displays colors. They are created by measuring the device using specific sample colors, calibrating its settings until the sample colors are accurate, then saving those calibration details to its profile. Color profiles adjust the colors in a photograph between the camera, working color space, display, and printer While many color profiles are specific to certain devices, there are standard profiles for common color spaces. Generic color profiles for color spaces like sRGB, Adobe RGB, and LAB provide a common working profile for creating and editing digital images. When necessary, color management software converts colors from generic profiles to device-specific ones and vice versa. Computers translate color data between color profiles whenever an image is captured, edited, or viewed. For example, a digital camera includes its own built-in color profile based on its sensor's performance characteristics and embeds that color profile into every photograph it takes. When opened by software like Photoshop, the color management system converts colors from the embedded profile to a generic working color profile for editing, while also converting it to the monitor's color profile for display. If necessary, the software adjusts those colors to fit within the monitor's gamut while maintaining its appearance as closely as possible. Finally, when printing the photo, it is converted one more time to the printer's color profile, using color mapping tables to translate color values from RGB to CMYK. NOTE: Camera RAW images are not given a color profile by the camera but are assigned one once imported into the photographer's editing software. Updated May 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/color_profile Copy

Color Space

Color Space A color space is a specific range of colors a single image or monitor can display. Each color space covers a slice of the visible spectrum provided by a given color model. The boundaries of a particular color space, and the colors within those boundaries, are referred to as its gamut. Color spaces can limit the colors used in an image to only those that a device is able to display, efficiently using the color spectrum to not waste information on out-of-gamut colors. The most common color space is sRGB, which is considered the bare minimum standard that any computer should be able to show. Higher-end monitors — like those used by photo and video professionals, or HDR-capable televisions — can display a wider gamut and use more advanced color spaces. Digital cameras that save images directly to the JPEG format often require the photographer to choose between sRGB and Adobe RGB. Cameras that capture RAW photos allow the photographer to choose a color space later, during processing and editing. Most digital color spaces are based on the RGB color model, subdividing the available color spectrum into triangle-shaped slices. The larger the triangle, the wider the color gamut that color space can display. For example, the sRGB color space is comparatively small and can display fewer colors than bigger color spaces like Adobe RGB. Some other color spaces, like ProPhoto RGB, have such a large gamut that they can theoretically include colors outside the visible spectrum. However, a wider gamut also means larger steps between colors. For example, if you save an image using the Adobe RGB color space with only 8 bits per channel, the colors may be more vibrant at their extremes than the smaller sRGB space, but with increased banding in gradient areas. By increasing the color depth to 10 or 12 bits per channel, you can include more variation within a color space. Updated April 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/color_space Copy

Column

Column A column is a vertical group of values within a table. It contains values from a single field in multiple rows. In databases, columns may be defined as individual fields within a table. Each field has a name, such as Name, Address, or Phone Number. Therefore, when multiple values from a column are selected, they will all have similar information, such as a list of phone numbers. When defining columns in databases and spreadsheets, it is often possible to specify the type of data, such as a string, number, or date. This helps ensure all the data within a given column has a similar format. NOTE: While it is easy to get rows and columns confused, just remember that columns are vertical (like the columns used in architecture), while rows are horizontal, like rows of text. Updated June 8, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/column Copy
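In a relational database, each column is declared with a name and a data type. The short sqlite3 sketch below (the table and column names are made up) shows what this looks like in practice:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Each column has a name and a type, so every value in the column has a similar format.
    cur.execute("CREATE TABLE contacts (name TEXT, phone_number TEXT, birthday DATE)")
    cur.execute("INSERT INTO contacts VALUES ('Ada', '555-0100', '1815-12-10')")

    # Selecting one column returns a vertical slice of the table -- here, a list of phone numbers.
    for (phone,) in cur.execute("SELECT phone_number FROM contacts"):
        print(phone)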

Command Line Interface

Command Line Interface A command line interface (or CLI) is a text-based interface used for entering commands. In the early days of computing, before the mouse, it was the standard way to interact with a computer. While the graphical user interface (GUI) has largely replaced CLIs, they are still included with several major operating systems, such as Windows and OS X. There are many different types of command line interfaces, but the two most popular ones are DOS (for Windows) and the bash shell (for Linux and OS X). Each CLI uses its own command syntax, but they all function in a similar way. For example, every CLI has a command prompt, which is displayed when the interface is ready to accept a command. When you type a command, it shows up next to the command prompt, and when you press Enter, the command is executed. Below are some examples of command prompts for different command line interfaces, with the root folder as the current directory. Windows (DOS): C:\> OS X (bash shell): My-iMac:/ me$ Linux (bash shell): [root@myserver /]# The standard way to change directories in most CLIs is to use the command cd, followed by the directory path. If you are using Windows, you can type cd C:\Users to access the Users folder. If you're using OS X, you can type cd /Volumes/SSD/Users (assuming the drive name is "SSD"). A few other commands are identical between DOS and the bash shell, but each CLI supports many different commands as well. For example, to list the contents of the current directory, you would type dir in DOS and ls in the bash shell. Most people prefer a standard graphical user interface to a command line one. However, some operations can actually be completed faster using a keyboard instead of a mouse. Therefore, CLIs are often used by network administrators and webmasters for common tasks like transferring files and checking server status. NOTE: A command line interface is sometimes referred to as a console or terminal window. OS X includes a utility called "Terminal" that functions as the CLI for OS X. Updated August 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/command_line_interface Copy

Command Prompt

Command Prompt A command prompt is part of a computer's text-based command line interface. The prompt appears as user and directory information followed by a specific symbol and text cursor, indicating that the computer is ready for the user to enter commands. Most operating systems provide access to a command prompt as an alternative to their graphical user interface. Unix, Linux, and macOS users can access the command prompt using the Terminal app, while Windows users can access it by opening the Command Prompt utility, cmd.exe. The command prompt often includes several pieces of important information. On Unix, Linux, and macOS, it starts with the username of the current user, followed by the computer name. After that, it includes the current directory where it will carry out commands; by default, the command prompt starts in the user's home directory, indicated by a ~ symbol. Windows formats its command prompt differently, omitting the user and computer name and starting with the full path of the current directory (including the drive letter). Finally, the command prompt ends with a symbol separating it from the commands the user enters, like a $, %, or >. A typical Unix command prompt follows this pattern: username@computername:/directory$ A macOS command prompt follows a similar pattern, but with spaces between elements: username@Computer-Name directory % Finally, the Windows command prompt remains the same as the DOS prompt: C:\Users\Username> The Windows Command Prompt utility (cmd.exe) Using the command prompt to issue commands requires memorization of some syntax and commands. While you can consult online guides or written documentation for commands that you won't use that often, it is still helpful to be familiar with at least some basic navigation commands. The cd command (used by Unix-based operating systems and Windows) lets you change the directory you're currently working in, either by entering a full path or a relative path from the current location. To see the files and folders within the current directory, you can use the ls command on Unix, Linux, and macOS, or the dir command on Windows. Updated December 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/command_prompt Copy

Commercial Software

Commercial Software Computer software comes in three different flavors: freeware, shareware, and commercial software. Freeware is free to use and does not require any payment from the user. Shareware is also free to use, but typically limits the program's features or the amount of time the software can be used unless the user purchases the software. Commercial software requires payment before it can be used, but includes all the program's features, with no restrictions or time limits. Commercial software programs typically come in a physical box, which is what you see displayed in retail stores. While it's true that the software boxes are not as big as they used to be, they still contain the software CD or DVD and usually a "getting started" manual along with a registration key used for registering the product. Most commercial software programs ask that the user register the program so the company can keep track of its authorized users. Some commercial software programs, such as newer versions of Microsoft and Adobe programs, require the user to register the programs in order to continue using them after 30 days. While most commercial software programs are sold in a physical box, many software titles are now available as downloads. These downloads are typically made available from the company's website. The user pays for the program directly on the website and instead of receiving the software in the mail, the user downloads it to their computer. Another popular way of purchasing commercial software online is simply paying for a registration key, which unlocks the features of a shareware program. This upgrades the shareware program to the commercial version, which removes any feature limitations from the shareware version. Updated December 11, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/commercialsoftware Copy

Compact Flash

Compact Flash Compact Flash, often abbreviated CF, is a removable memory card format that uses nonvolatile flash memory. Digital cameras and other handheld electronic devices use CF cards to store and transfer digital photos, video, and other data. SanDisk introduced Compact Flash in 1994, and over time it became one of the most common removable memory card formats. However, it is becoming less common as newer formats like SD, microSD, and CFexpress become more popular, phasing out the CF slot in new digital cameras. Compact Flash memory cards are available in two physical sizes, known as Type I and Type II. Type I CF cards are 3.3 mm thick, while Type II are 5 mm thick. Type II cards can use that extra space to add features to the card, like Wi-Fi or Bluetooth wireless radios, to allow data transfers between cameras and computers without removing the card. Type I cards can fit into Type II slots, so a camera with a Type II CF slot can accept any CF card. Both types use a 50-pin connector based on a subset of the 68-pin PCMCIA connector. External card readers allow computers to read CF cards. A SanDisk Compact Flash card, showing its 50-pin connector During the height of its popularity, the Compact Flash format had several benefits over other types of memory cards. It was physically larger than other formats, but that meant that it was also more durable. It was available in higher capacities than other formats (up to 512 GB) and provided faster transfer speeds (up to 160 MB/s). Most big camera makers (including Canon and Nikon) included CF card slots on their consumer- and professional-level digital cameras. Over time, however, the SD card format became more popular due to its smaller size, which allowed for smaller camera bodies. In 2017, a successor format, CFexpress, was introduced with significantly faster read and write speeds necessary for high-resolution 4K and 8K video recording. Updated September 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/compact_flash Copy

Compatibility Layer

Compatibility Layer A compatibility layer is a software interface that allows applications written and compiled for one operating system or hardware architecture to run on a different host system. It receives API calls and other system functions designed for one environment and translates them into calls that the host system can understand. Compatibility layers can serve a similar function as emulation and virtualization by allowing software written for one environment (for example, Windows on an x86 processor) to run in either a different operating system on similar hardware (Linux on an x86 processor) or a similar operating system on a different hardware architecture (Windows on an ARM processor). They do this by intercepting every system API call the program tries to make to its original environment and converting them into the equivalent calls to the new host environment. Some notable compatibility layers include Wine (which allows Windows programs to run on Linux by implementing the Windows API), Rosetta 2 on macOS (which allows macOS apps compiled for x86 to run on Apple Silicon), and WOW64 on Windows (which translates Windows apps built for x86 to run on x86-64 and ARM processors). Running software in a compatibility layer often uses fewer system resources than emulating another environment or running a virtual machine. However, some software may experience errors or require additional configuration to work properly when using a compatibility layer. Comparison to Emulation and Virtualization Emulators, virtualization, and compatibility layers are all often used to run another environment's software. Each method accomplishes that task differently. An emulator uses software to recreate both a device's hardware and its software to emulate the entire environment. Since emulating hardware devices in software requires significant system resources, a host system needs to be significantly faster than the hardware it's emulating to achieve similar performance. A compatibility layer meant to translate between two hardware platforms will include a hardware emulator, using it only as needed instead of constantly emulating the entire environment. A virtual machine simulates an entire computer on the host machine's hardware. It runs a copy of an operating system and uses a software layer to translate virtualized hardware calls to the host machine's hardware. A compatibility layer only translates the calls it needs to, doing so without simulating hardware or software calls that it does not need to. Updated November 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/compatibility_layer Copy

Compile

Compile To compile a program is to convert it from human-readable source code into an executable file. Compiling a program also packages assets like images and icons into the program. It is necessary to compile a program before a computer can run it. When a developer writes a program, they do so in a programming language designed to be easy for them to write, read, and maintain. However, computers cannot run a program directly from source code — the CPU can only understand instructions in its native, low-level machine language. Compiling a program converts the instructions in the source code into a corresponding set of instructions in machine language, then encodes it into a binary executable file. Compiling a program translates source code files into an executable file Software development kits (SDKs) include programs called compilers for developers to compile their projects. A compiler converts source code into an executable file for a single operating system and processor architecture at a time — for example, Windows on x86-64 or Unix on ARM64. Some compilers allow a developer to include machine code for multiple architectures in a single file — for example, a universal binary for macOS that can run on both Intel and Apple Silicon processors — by separately compiling for each processor architecture, then combining them into a single packaged executable. NOTE: Programming languages that don't require compiling before running are known as "interpreted" languages. Instead, an "interpreter" program translates a program's source code into machine language at runtime. For example, a JavaScript program is not compiled but instead runs through a separate JavaScript interpreter. Updated March 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/compile Copy
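In practice, compiling is usually a single command that turns a source file into an executable. The sketch below shells out to a C compiler from Python; it assumes gcc is installed and that a file named hello.c exists, both of which are illustrative assumptions.

    import subprocess

    # Compile hello.c (human-readable source code) into an executable named "hello".
    subprocess.run(["gcc", "hello.c", "-o", "hello"], check=True)

    # The CPU can now run the resulting machine-code binary directly.
    subprocess.run(["./hello"], check=True)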

Compiler

Compiler A compiler is a software program that compiles program source code files into an executable program. Integrated development environments (IDEs) include compilers as part of their programming software packages. The compiler takes source code files written in a high-level language, such as C, BASIC, or Java, and translates that code into a low-level language known as machine code. This code is specific to the selected processor type, such as an Intel x86-64 or ARM. The resulting program's underlying code is understood and executed by the processor when run from the operating system. Some compilers add extra optimizations while they compile, making the program run faster at the cost of a longer compilation process. These optimizations can reduce or eliminate the amount of assembly code (a more-efficient low-level language) that a programmer needs to write, especially when compiling for processors with complex instruction sets. Once a compiler compiles source code files into a program, the program cannot be modified. Therefore, a programmer must update the source code itself and recompile the program. Fortunately, most modern compilers can detect where changes were made and only recompile the modified files, which saves programmers a lot of time. Updated February 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/compiler Copy

Component

Component Computers are made up of many different parts, such as a motherboard, CPU, RAM, and hard drive. Each of these parts is made up of smaller parts, called components. For example, a motherboard includes electrical connectors, a printed circuit board (PCB), capacitors, resistors, and transformers. All these components work together to make the motherboard function with the other parts of the computer. The CPU includes components such as integrated circuits, switches, and extremely small transistors. These components process information and perform calculations. Generally speaking, a component is an element of a larger group. Therefore, the larger parts of a computer, such as the CPU and hard drive, can also be referred to as computer components. Technically, however, the components are the smaller parts that make up these devices. Component may also refer to component video, which is a type of high-quality video connection. A component connection sends the video signal through three separate cables — one for red, green, and blue. This provides better color accuracy than composite video (typically a yellow connector), which combines all the color signals into a single cable. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/component Copy

Compression

Compression Compression, or "data compression," is used to reduce the size of one or more files. When a file is compressed, it takes up less disk space than an uncompressed version and can be transferred to other systems more quickly. Therefore, compression is often used to save disk space and reduce the time needed to transfer files over the Internet. There are two primary types of data compression: File Compression Media Compression File compression can be used to compress all types of data into a compressed archive. These archives must first be decompressed with a decompression utility in order to open the original file(s). Media compression is used to save compressed image, audio, and video files. Examples of compressed media formats include JPEG images, MP3 audio, and MPEG video files. Most image viewers and media playback programs can open standard compressed file types directly. File compression is always performed using a lossless compression algorithm, meaning no information is lost during the compression process. Therefore, a compressed archive can be fully restored to the original version when it is decompressed. While some media is compressed using lossless compression, most image, audio, and video files are compressed using lossy compression. This means some of the media's original quality is lost when the file is compressed. However, most modern compression algorithms can compress media with little to no loss in quality. Updated April 7, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/compression Copy
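Lossless file compression can be demonstrated with Python's built-in zlib module: the data shrinks, and decompressing it restores the original bytes exactly.

    import zlib

    original = b"the quick brown fox jumps over the lazy dog " * 100
    compressed = zlib.compress(original)

    print(len(original), len(compressed))            # 4500 bytes vs far fewer (well under 100)
    print(zlib.decompress(compressed) == original)   # True -- nothing was lost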

Computer

Computer A computer is a programmable machine capable of executing a programmed list of instructions, accepting input, and returning output. While many mechanical devices can technically function as computers, the term commonly refers to digital electronic computers. All computers are a combination of hardware and software that are useless without each other. The term "hardware" refers to the physical parts of a computer, while "software" refers to the programs that tell the hardware what to do. All computers include a common set of hardware components: A motherboard is a printed circuit board that connects every other component via communication pathways called buses. A CPU performs data processing functions and serves as the "brain" of the computer. System memory stores temporary data for the CPU to access quickly, like active programs and open files. A storage disk provides long-term data storage, keeping applications and files available for access when needed. Other components can expand the computer's capabilities. Video cards connect a computer to a monitor, draw the user interface, render 3D graphics, and decode compressed video. Input devices like keyboards, mice, and touchscreens give a computer's user a way to interact with it and issue commands. Network adapters let a computer talk to other computers over a local network or the internet. USB ports and other external interfaces allow you to connect peripherals. Four common computer form factors — a laptop, all-in-one, tablet, and smartphone The most important software running on a computer is its operating system. This software communicates directly with the computer's hardware and allows other software applications to run. The combination of a computer's hardware architecture and operating system is known as a platform. Computers that share a platform can often run the same programs as each other. Computer Form Factors Computers come in several shapes and sizes, known as form factors. Some form factors allow a computer's owner to customize its hardware configuration, while others are tightly-integrated devices. Desktop computers are modular personal computers made up of discrete components assembled inside a chassis. A desktop computer's owner may replace or upgrade hardware components to improve a computer's performance. Desktop computers connect to external displays and input devices. Laptops are tightly integrated portable personal computers whose foldable cases include a monitor, keyboard, and trackpad. They include rechargeable batteries that allow them to run without a constant power connection. Unlike desktops, the hardware is often not replaceable or upgradeable. All-in-one computers are a hybrid between desktops and laptops. They're tightly integrated like laptops, and built into a chassis with a large monitor. Like desktops, they require separate input devices like keyboards and mice. They are not portable, but are more compact than desktop computers. Smartphones and tablets are even more compact than laptops, integrating a computer and battery behind a touchscreen. While they support external keyboards, most interaction with a smartphone or tablet happens through the touchscreen. Rack-mounted computers are often used as dedicated servers. These computers use flat enclosures that fit into standard rack mounts, which can store several computers stacked vertically. These computers often don't connect to monitors, keyboards, or mice — instead, they are controlled and monitored remotely over a network. 
Mainframes and supercomputers are large-scale, high-powered computers designed for computationally-intensive tasks. They can perform far more calculations per second than a personal computer and are used by large organizations for complex data analysis. Updated May 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/computer Copy

Computer Ethics

Computer Ethics Ethics is a set of moral principles that govern the behavior of an individual or group of people. Computer ethics is the application of moral principles to the use of computers and the Internet. Examples include intellectual property rights, privacy policies, and online etiquette, or "netiquette". Computers make it easy to duplicate and redistribute digital content. Respecting copyright guidelines, however, remains an ethical obligation. When using software, it is important to understand and follow the license agreement, or SLA. Using commercial software without paying for a license is considered piracy and is a violation of computer ethics. Hacking, or gaining unauthorized access to a computer system, is also an unethical way to use computers. As technology advances, computers and the Internet have an increasing impact on society. Therefore, computer ethics must be part of the discussion whenever new technologies are created. A modern example is how artificial intelligence affects existing jobs and human communication. When computer ethics is part of the conversation, it helps ensure new technologies positively affect society. Updated January 22, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/computer_ethics Copy

Computer Science

Computer Science Computer science is the study of computers and computing concepts. It includes both hardware and software, as well as networking and the Internet. The hardware aspect of computer science overlaps with electrical engineering. It covers the basic design of computers and the way they work. A fundamental understanding of how a computer "computes," or performs calculations, provides the foundation for comprehending more advanced concepts. For example, understanding how a computer operates in binary allows you to understand how computers add, subtract, and perform other operations. Learning about logic gates enables you to make sense of processor architecture. The software side of computer science covers programming concepts as well as specific programming languages. Programming concepts include functions, algorithms, and source code design. Computer science also covers compilers, operating systems, and software applications. User-focused aspects of computer science include computer graphics and user interface design. Since nearly all computers are now connected to the Internet, the computer science umbrella covers Internet technologies as well. This includes Internet protocols, telecommunications, and networking concepts. It also involves practical applications, such as web design and network administration. NOTE: While computer science (lowercase) refers to the general study of computers, Computer Science (capitalized) is an academic major offered at many colleges and universities. It is often abbreviated "CS" or "CompSci." Examples of Computer Science courses include: Introduction to Computing Fundamental Programming Concepts Data Structures Analysis of Algorithms Computing Theory Computer Science classes may also be specific to certain industries or topics. Examples include: Video Game Design Computer Graphics Database Systems Cryptography Networking Concepts Like other educational disciplines, Computer Science courses vary from beginner to advanced. The number of a Computer Science course typically indicates the level of the class. For example, an introductory class may be labeled CS 102, while an advanced class may be labeled CS 431. Updated August 22, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/computer_science Copy
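As a small illustration of the binary arithmetic mentioned above, the following Python sketch shows the same addition written in decimal and in binary; the specific numbers are arbitrary.

a = 0b0101          # the binary literal for 5
b = 0b0011          # the binary literal for 3
total = a + b       # the processor adds the underlying binary values

print(total)        # 8
print(bin(total))   # 0b1000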

Concatenation

Concatenation Concatenation is "the action of linking things together in a series" (Oxford English Dictionary). In computer programming terms, it refers to a function that links two or more text strings together into a single string. Databases and spreadsheets also use concatenation functions to merge data from separate fields and cells. Most programming languages include several concatenation methods to combine strings. When used with text strings, the + operator often works, but special concatenation functions can provide developers with more control. For example, both of the following JavaScript samples join two variables, adding a space in between: fullName = firstName+" "+lastName; fullName = firstName.concat(" ", lastName); While the + operator is most common, some languages use other operators. For example, Perl uses . to concatenate strings, while Visual Basic uses &. Two cells in a spreadsheet joined using a concatenate function Concatenation functions can also combine strings in database fields and spreadsheet cells. For example, if a customer information table in an SQL database keeps the first and last names in separate fields, you can use the CONCAT() function to select both and format them as a single string. Spreadsheet applications like Microsoft Excel can use the CONCATENATE() function to select the values of two referenced cells and display them as one joined value. For example, CONCATENATE(B2," ",C2) joins the contents of cells B2 and C2, with a space in between. Updated June 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/concatenation Copy
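For comparison with the JavaScript samples above, here is a short Python sketch of the same operation; the names are purely illustrative.

first_name = "Ada"
last_name = "Lovelace"

# Concatenate with the + operator, adding a space in between
full_name = first_name + " " + last_name

# Or join a list of strings with a separator string
full_name = " ".join([first_name, last_name])

print(full_name)   # Ada Lovelace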

Configuration

Configuration A system configuration refers to the technical specifications (or "tech specs") of a device. For a computer, the primary specs include the CPU type and speed, system memory (or RAM), graphics card (GPU), storage device type and capacity, and operating system. A detailed configuration may include the motherboard, network adapters (including Ethernet and Wi-Fi capabilities), Bluetooth connectivity, and I/O ports. Below is an example configuration of a Dell Alienware gaming PC. Alienware Aurora R16 desktop computer CPU: Intel Core i9 14900KF (68 MB cache, 24 cores, up to 6.0 GHz) GPU: NVIDIA GeForce RTX 4090, 24 GB GDDR6X RAM: 32 GB (2 x 16 GB, DDR5, 5600 MT/s) Storage: 2 TB, M.2, PCIe NVMe SSD OS: Windows 11 Home System configurations also apply to virtual environments, which include similar specifications. Unlike laptops and desktop PCs, cloud configurations can update dynamically, adding RAM, storage, and processing power based on resources available across multiple systems. Understanding core specs like CPU, GPU, RAM, and storage is helpful when choosing a new computer or virtual system. For example, if you mainly use your PC for browsing the web and writing documents, you don't need a high-end CPU or lots of RAM the way someone who does video editing does. If you play games on your PC, you might need a high-end video card, whereas an audio engineer only needs basic graphics capabilities. Generally, it's best to choose a system with slightly better specs than you need so that it will be sufficient for your current and future computing tasks. Updated October 11, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/configuration Copy

Console

Console A console is the combination of a monitor and keyboard. It is a rudimentary interface in which the monitor provides the output and the keyboard is used for input. While any computer with a monitor and keyboard may be considered a console, the term most often refers to a system used to control one or more servers. Early consoles in the 1970s and 1980s provided a text-only "command line" interface, in which a network admin could type commands at a command prompt. Modern consoles often provide a graphical interface that can be used to control machines on a local network (LAN) or provide remote access to remote systems. These consoles typically include a mouse for navigating graphical interfaces. The terms "console" and "terminal" are often used synonymously. However, "terminal" may also be used to describe the software that runs on a console, such as a command line interface or remote access program. Mac OS X includes both a Terminal program that provides a command line interface, and a Console utility that displays system logs and diagnostic reports. NOTE: A console may also refer to other types of hardware besides computer interfaces. For example, gaming systems such as the PlayStation, Xbox, and Wii, are called video game consoles, while multi-channel audio mixing boards are called mixing consoles. Updated March 12, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/console Copy

Constant

Constant In mathematics, a constant is a specific number or a symbol that is assigned a fixed value. For example, in the equation below, "y" and "x" are variables, while the numbers 2 and 3 are constants. y = 2x - 3 Constants are also used in computer programming to store fixed values. They are typically declared at the top of a source code file or at the beginning of a function. Constants are useful for defining values that are used many times within a function or program. For example, in the C++ code below, min and max are declared as constants. const int min = 10; const int max = 100; By using constants, programmers can modify multiple instances of a value at one time. For example, changing the value assigned to max in the example above will modify the value wherever max is referenced. If the number 100 was used instead of max, the programmer would have to modify each instance of "100" individually. Therefore, it is considered good programming practice to use constants whenever a fixed value is used multiple times. Updated May 3, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/constant Copy
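The same practice carries over to other languages. Python has no const keyword, so the sketch below relies on the common convention of all-uppercase names to mark values that should be treated as constants; the names and limits are made up for illustration.

MIN_VALUE = 10     # treated as constants by convention
MAX_VALUE = 100

def clamp(value):
    # Both limits are referenced here and anywhere else they are needed;
    # changing MIN_VALUE or MAX_VALUE above updates every use at once.
    return max(MIN_VALUE, min(MAX_VALUE, value))

print(clamp(5))     # 10
print(clamp(250))   # 100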

Container

Container A container is a software package that contains everything the software needs to run. This includes the executable program as well as system tools, libraries, and settings. Containers are not installed like traditional software programs, which allows them to be isolated from other software and the operating system itself. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux computer and a Windows machine. Second, containers provide added security since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows registry, a container can only modify settings within the container. This makes containers ideal for software testing and development. Containers also eliminate installation issues, including system conflicts, version incompatibilities, and missing dependencies. The result is a "works on all machines" solution, which is ideal for both developers and end users. It also makes the jobs of network administrators easier, since they can deliver containers to multiple users without having to worry about compatibility issues. Containers vs Virtual Machines Containers are similar to virtual machines (virtualization) since they include everything needed to run in a single package. However, unlike virtual machines (VMs), containers do not include a guest OS. Instead, containers run on top of a "container platform," like Docker, which is installed on an operating system. Containers are "lightweight," meaning they require far less disk space than VMs. Additionally, multiple containers can run side-by-side on the same container platform. Updated August 21, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/container Copy

Contextual Menu

Contextual Menu A contextual menu, also known as a "right-click menu" or "context menu," is a menu that appears when a user right-clicks something. The options in the menu are "contextual," because only commands relevant to the clicked-on object are displayed. All modern operating systems with graphical user interfaces (GUIs) include contextual menus as part of their interface, including mobile operating systems. Contextual menus provide a shortcut to many commands that are available elsewhere but are specifically relevant to what you right-clicked on. For example, if you right-click a file on your desktop, you'll see commands that let you open it, rename it, or view its properties. Right-click the desktop itself, and you'll instead see options to change the wallpaper or rearrange desktop icons. Right-click some text in Microsoft Word, and you'll see another set of commands that help you copy and paste text, change font and paragraph settings, or look up synonyms for a word. A contextual menu in macOS when right-clicking a video file While right-clicking is the most common way to open a contextual menu, it's not the only way. On macOS, holding the Control key while clicking something opens a contextual menu. On Windows and Linux, pressing the Menu key on the keyboard opens a contextual menu on the selected object. On both iOS and Android mobile devices, pressing and holding your finger on the touchscreen opens a contextual menu. Updated October 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/contextual_menu Copy

Control Panel

Control Panel The Control Panel is a feature of the Windows operating system that allows the user to modify system settings and controls. It includes several small applications, or control panels, that can be used to view and change hardware or software settings. Some examples of hardware control panels are Display, Keyboard, and Mouse settings. Software control panels include Date and Time, Power Options, Fonts, and Administrative Tools. Many control panels are included as part of the Windows operating system, but others can be installed by third-party applications or utilities. For example, if you add a new mouse to your computer, it may come with a CD for installing a control panel specific for that mouse. Some graphics cards may also install an additional control panel that gives the user greater control over the computer's visual settings. Regardless of when control panels are installed, they can always be found within the Control Panel folder. The Windows Control Panel can be accessed by clicking the Start menu and selecting Control Panel. It is also available in the "Other Places" section of the window's sidebar when you open My Computer. In Windows XP and Windows Vista the Control Panel can be viewed in either Category View or Classic View. Category View arranges the control panels into sections, while Classic View shows them all at once. While the Category View is designed to make locating different settings easier, people familiar with most of the control panels often find the Classic View more efficient. Control Panels were also used for many years by the Mac OS, through Mac OS 9. However, with the introduction of Mac OS X, control panels were consolidated into a single interface called System Preferences. The control panels themselves are now called "Preference Panes" in Mac OS X. They can be accessed by selecting "System Preferences" from the Apple menu or by clicking the System Preferences icon in the Dock. Updated October 18, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/controlpanel Copy

Control Unit

Control Unit A control unit (CU) is an integrated circuit in a processor that controls the input and output. It receives instructions from a program, then passes them to the arithmetic logic unit (ALU). The ALU performs the appropriate calculations and sends the resulting values back to the control unit. The control unit sends these values to the corresponding program as output. A typical control unit is composed of several logic gates and includes two important components: Program Counter (PC) Instruction Register (IR) The program counter loads individual instructions from memory and stores them sequentially. The instruction register decodes these instructions and converts them to commands for the CPU. After each instruction, the CU increments the program counter and fetches the next instruction. Control units operate at the clock speed of the corresponding CPU. Therefore, the control unit of a 3 GHz processor can handle three billion operations per second. Updated October 31, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/control_unit Copy
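The fetch-decode-execute cycle described above can be sketched in software. The toy Python simulation below is a deliberate simplification (a real control unit is hardware logic, not code), and the instruction format and register names are invented for illustration.

# A tiny made-up instruction set: (operation, operand)
program = [("LOAD", 7), ("ADD", 5), ("ADD", 3), ("HALT", None)]

accumulator = 0
program_counter = 0                                  # index of the next instruction

while True:
    instruction_register = program[program_counter]  # fetch
    program_counter += 1                             # increment the counter
    opcode, operand = instruction_register           # decode
    if opcode == "LOAD":                             # execute (the "ALU" work)
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)   # 15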

Controller Card

Controller Card A controller card, sometimes referred to simply as a "controller," is a computer hardware component that creates an interface between a computer’s main system motherboard and other hardware components. Some controllers will be integrated directly into the motherboard, while others may be added on as expansion cards. Motherboards have several types of controllers built in as dedicated chips. These controllers are necessary for integrated components to communicate with each other. For example, the memory and storage controllers allow other parts of the computer access to the RAM and disk drives. Built-in graphics and audio controllers create the interface between the motherboard and the video and audio ports, allowing you to connect monitors and audio devices. A network controller will create the interface between the motherboard and a built-in Ethernet port. USB and Thunderbolt controllers allow the motherboard to communicate with devices plugged into those ports. HP Thunderbolt 3 PCIe controller card Even though most motherboards have most necessary controllers already integrated, you can add controller cards that add extra ports and new connectivity options. Controller cards plug into one of a computer’s expansion slots, typically a PCIe slot. For example, you can add more USB ports to a computer by adding a USB controller card or add the capacity for a RAID storage array by adding a disk array controller. Updated September 21, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/controller_card Copy

Cookie

Cookie A cookie is a small amount of data generated by a website and saved by your web browser. Its purpose is to remember information about you, similar to a preference file created by a software application. While cookies serve many functions, their most common purpose is to store login information for a specific site. Some sites will save both your username and password in a cookie, while others will only save your username. Whenever you check a box that says, "Remember me on this computer," the website will generate a login cookie once you successfully log in. Each time you revisit the website, you may only need to enter your password or you might not need to log in at all. Cookies are also used to store user preferences for a specific site. For example, a search engine may store your search settings in a cookie. A news website may use a cookie to save a custom text size you select for viewing news articles. Financial websites sometimes use cookies to store recently viewed stock quotes. If a website needs to store a lot of personal information, it may use a cookie to remember who you are, but will load the information from the web server. This method, called "server side" storage, is often used when you create an account on a website. Browser cookies come in two different flavors: "session" and "persistent." Session cookies are temporary and are deleted when the browser is closed. These types of cookies are often used by e-commerce sites to store items placed in your shopping cart, and can serve many other purposes as well. Persistent cookies are designed to store data for an extended period of time. Each persistent cookie is created with an expiration date, which may be anywhere from a few days to several years in the future. Once the expiration date is reached, the cookie is automatically deleted. Persistent cookies are what allow websites to "remember you" for two weeks, one month, or any other amount of time. Most web browsers save all cookies in a single file. This file is located in a different directory for each browser and is not meant to be opened manually. Fortunately, most browsers allow you to view your cookies in the browser preferences, typically within the "Privacy" or "Security" tab. Some browsers allow you to delete specific cookies or even prevent cookies from being created. While disallowing cookies in your browser may provide a higher level of privacy, it is not recommended since many websites require cookies to function properly. NOTE: Since cookies are stored in a different location for each web browser, if you switch browsers, new cookies will need to be created. Updated July 9, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cookie Copy
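On the wire, a cookie is just a specially formatted HTTP header. The Python sketch below uses the standard http.cookies module to build a persistent cookie; the cookie name, value, and two-week lifetime are hypothetical.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["username"] = "techterms_fan"
cookie["username"]["path"] = "/"
cookie["username"]["max-age"] = 60 * 60 * 24 * 14   # persists for two weeks

# Prints the Set-Cookie header a web server would send to the browser,
# including the name, value, Max-Age, and Path attributes
print(cookie.output())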

Copilot

Copilot Microsoft Copilot is an assistive AI tool that answers questions and provides suggestions via a conversational chat interface. It can also summarize documents, write new content, and generate custom images based on user input. The digital companion helps with everyday tasks, including personal and work-related ones. For example, it can give you dinner recipe ideas or tips for improving your golf swing. If you need help writing a resume or cover letter, Copilot can produce a template to get you started. Microsoft Copilot is "your everyday AI companion" Technology Copilot uses several generative AI models to process user input. Below are three foundational technologies as of June 2024: Microsoft Prometheus - Microsoft's proprietary large language model (LLM) GPT-4 (Generative Pre-Trained Transformers version 4) - the same models used by OpenAI's ChatGPT DALL-E 3 - an AI model that generates and modifies images from textual input Like other generative AI services, Microsoft Copilot is a tool that "learns" over time. It utilizes machine learning to enhance its responses based on user feedback and follow-up questions, helping it adapt to users' needs. Availability Copilot is integrated into Windows 11 and Microsoft 365 applications. It is also available as a standalone app and on the web at copilot.microsoft.com. When using the Bing search engine, you may see Copilot suggestions alongside the search results. The standard version of Copilot is free to use. If you want priority access to the latest models and faster responses, Copilot Pro is available as a monthly subscription. Updated June 20, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/copilot Copy

Copy

Copy To copy text or data is to create a duplicate in the system clipboard. Once you've copied something, you can paste it into another part of the same document, another document in the same program, or another program entirely. Copying data is similar to cutting it, but cutting removes the data from its original spot, and copying it leaves it in place. You can select the Copy command from a program's Edit menu or use the keyboard shortcut Control + C (on Windows) / Command + C (on macOS). The Copy, Cut, and Paste commands are some of the earliest interface metaphors that came with graphical user interfaces. They are meant to remind computer users of physical interactions with paper documents — copying a page of paper on a photocopier to create duplicates while preserving the original, destructively cutting text out of a page using scissors or a knife, and placing either the cut original or copied duplicate onto another page using paste. Thankfully, copying text in a digital document is much easier through menu commands and keyboard shortcuts. The Copy button on the Microsoft Word ribbon While a piece of copied data is on the system clipboard, you can paste it anywhere that can accept that data type. For example, you can paste text anywhere you can enter text, or paste image data in any image editor or desktop publishing app. You can also copy and paste files in your file browser to create duplicates in another folder or on another disk. You can paste the same data multiple times since pasting does not remove data from the clipboard. However, copying something new replaces the contents of the clipboard. Some operating systems allow you to go back through the clipboard's history to recover something you had copied earlier, but this requires extra steps and not an easy keyboard shortcut. Updated December 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/copy Copy

Copyright

Copyright Copyright is a legal means of protecting an author's work. It is a type of intellectual property that provides exclusive publication, distribution, and usage rights for the author. This means whatever content the author created cannot be used or published by anyone else without the consent of the author. The length of copyright protection may vary from country to country, but it usually lasts for the life of the author plus 50 to 100 years. Many different types of content can be protected by copyright. Examples include books, poems, plays, songs, films, and artwork. In modern times, copyright protection has been extended to websites and other online content. Therefore, any original content published on the Web is protected by copyright law. This is important in the digital age we live in, since large amounts of content can be easily copied and pasted. So how do you obtain copyright protection? Fortunately, in most countries, copyright protection is automatic. This means whenever you publish original content, it is automatically protected by copyright law. For example, if you post a blog on the Internet, your content is automatically covered by copyright. In most cases, this type of copyright protection is all that is necessary. However, if you want others to know your content is copyright protected, you can post the copyright logo (©) next to your name on any Web pages that include your original content. You may also want to include the years you have owned the content. Below is an example of a copyright line: Copyright © 2007-2009 [your name]. In situations where it is critical to protect an author's rights, many countries provide copyright registration, which allows authors to register copyrighted content with a central agency. This makes it easier to prove ownership of content if it is ever disputed. Copyright provides a helpful means of protecting original content. It serves to give people credit for the work they do, which is something we can all appreciate. Therefore, if you ever consider copying someone else's content, think of how it would make you feel if someone copied your original work and published it as their own. If you ever would like to use another person's content, make sure to ask the author for permission first. And always give credit where credit is due. Updated September 8, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/copyright Copy

Copyright Infringement

Copyright Infringement A copyright infringement is a violation of an individual or organization's copyright. It describes the unauthorized use of copyrighted material, such as text, photos, videos, music, software, and other original content. In the computer industry, copyright violations often refer to software programs and websites. Software is usually distributed under a certain type of license agreement, or SLA. This license defines the terms of use, including whether or not the software can be distributed to other users. For example, open source programs are free to use and may often be redistributed without limitations. Commercial software, however, must be purchased and cannot be redistributed. Using commercial software without paying for it is a copyright infringement and is commonly known as piracy. Websites that contain original content are automatically protected under international copyright law. In other words, you cannot copy the content of one website and publish it on another site without the author's permission. Reposting text, images, videos, audio clips, or any other content found on the Web without permission constitutes a copyright infringement. Legal penalties for violating website copyrights depend on the extent of the violation and the damages it causes. Since we live in a digital age, copying content is often as simple as a copy and paste operation. This makes it possible for one person to copy and republish content in a few minutes that may have taken another person several years to create. Therefore, in 1998, the United States Congress took steps to defend intellectual property by passing the Digital Millennium Copyright Act, commonly known as DMCA. This law defines specific types of digital copyright infringements and establishes serious penalties for violators. NOTE: If you ever want to use or republish content from the Web, you should always ask the author for permission and include an appropriate reference and link to the original content. Updated November 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/copyright_infringement Copy

CORS

CORS Stands for "Cross-Origin Resource Sharing." CORS allows scripts on webpages to request resources from other domains. Most web browsers block these types of requests by default for security purposes. A webpage can request resources from another domain — as long as the requests come from the HTML. For example, the <head> section may reference resources, such as CSS files, fonts, and JS files from other domains. Examples include Google Analytics scripts, jQuery libraries, and fonts hosted on another server. Similarly, the <body> can request images from a CDN or other domain. Cross-origin resource requests in the HTML do not require CORS permissions. When a script or iframe element makes a cross-origin request, CORS is required. For example, an AJAX method – which runs after the page is loaded – cannot request a resource from another domain. CORS overrides this default browser setting and allows the request to go through. CORS is implemented using "access control" HTTP headers. A server admin can add or modify the response headers, which are sent to a client's browser when a webpage is accessed. These settings, which can be applied to Apache and IIS servers, may be site-specific or server-wide. Below are common request and response headers: CORS Request Headers: Origin Access-Control-Request-Method Access-Control-Request-Headers CORS Response Headers: Access-Control-Allow-Origin Access-Control-Allow-Methods Access-Control-Expose-Headers CORS Example If a script on techterms.com requests a resource from sharpened.com using a GET action, it may send the following request headers: Origin: https://techterms.com Access-Control-Request-Method: GET To allow the request, sharpened.com may respond with the following headers: Access-Control-Allow-Origin: https://techterms.com Access-Control-Allow-Methods: GET Access-Control-Allow-Origin can be set to specific domains or a wildcard using an asterisk (*). The wildcard setting allows cross-resource requests from all domains, which may be a security risk. Access-Control-Allow-Methods can be set to PUT, POST, DELETE, and others, including a wildcard (*) setting that allows all methods. Updated November 3, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cors Copy
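Server-side code can also attach these response headers programmatically. The Python sketch below assumes the third-party Flask framework and a hypothetical allowed origin; it simply adds CORS headers to every response.

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Allow cross-origin requests from one specific (hypothetical) origin
    response.headers["Access-Control-Allow-Origin"] = "https://techterms.com"
    response.headers["Access-Control-Allow-Methods"] = "GET"
    return response

@app.route("/data")
def data():
    return "ok"

A configuration file on an Apache or IIS server can achieve the same result; the exact directives depend on the server and the modules installed.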

CPA

CPA Stands for "Cost Per Action." CPA is a performance-based online advertising model where advertisers pay publishers a fixed amount whenever a user completes a specified action. Unlike other models that pay for clicks or impressions, CPA focuses on conversions. The user must not only click on an ad but also complete a predetermined action, such as making a purchase, filling out a form, or downloading an app. For example, if a visitor clicks an ad on a website, a click is recorded but not an action (or "conversion"). If the user takes the next step and completes a purchase, fills out a form, or downloads an app, the advertiser pays the publisher according to the agreed CPA rate. Generally, commissions for sales are higher than CPA rates for downloads and form submissions. CPA vs Other Advertising Models CPA is often compared to CPL (Cost Per Lead), but while CPL specifically refers to generating leads (like form submissions), CPA is broader and can include any predefined action. CPA may be riskier for publishers than CPC (Cost Per Click) or CPM (Cost Per Thousand Impressions) since payment is contingent on users completing the desired action, which may not always happen even if a user clicks an ad. CPA is considered more cost-effective for advertisers since they only pay for tangible outcomes, making it ideal for businesses focused on ROI (Return on Investment). Updated August 27, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cpa Copy

CPC

CPC Stands for "Cost Per Click." CPC is a term used in online marketing that refers to how much an advertiser pays a website or ad network each time someone clicks one of their ads. It applies to ads displayed on search engine results pages, websites, social media, or anywhere else online. CPC is also known as PPC (Pay Per Click). When advertisers want to run an ad online, ad networks have them bid against other advertisers by setting a maximum CPC. The maximum CPC is part of a formula that looks at the keywords and demographics targeted, the advertiser's industry, how relevant the ad is, and when the ad runs. After the formula selects which ad to run, the ad network charges the advertiser just enough to win the bid, often less than the maximum CPC. The goal of CPC bidding, when compared to CPM (Cost Per Mille) bidding, is the focus on click-through rate, or CTR. Advertisers that use CPC bidding want people to click their ad to visit their website, often to sell a product or service. Advertisers using CPM bidding may instead just want their ads seen as widely as possible to build brand awareness without caring whether people click the ads. Updated October 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cpc Copy

CPL

CPL Stands for "Cost Per Lead." CPL is a term used in online marketing that refers to how much an advertiser pays a website or ad network each time they acquire a lead through an advertising campaign. This advertising model is also called "online lead generation." Once the ad campaign gathers leads, further marketing efforts attempt to convert those leads into customers. Advertisers running a campaign using a CPL pricing model tend to have a different focus than those using a CPC (Cost per Click) or CPM (Cost per Mille) model. Instead of paying for people to click or view an ad, a CPL model pays for leads — someone who has chosen to sign up to receive more information or express interest. First, an online ad sends a potential lead to the ad campaign's landing page. From there, they explicitly choose to share their information and express interest by the method set by the ad campaign — they may sign up for an email newsletter, a rewards program, a free trial, or otherwise share their personal information and express interest. These advertising campaigns tend to cost more, but the advertiser ends up with a more-valuable pool of leads. Leads who choose to sign up are more likely to become paying customers and even repeat customers. Updated November 4, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cpl Copy

CPM

CPM Stands for "Cost Per Mille." CPM, or "Cost Per Thousand Impressions," is a metric used in online advertising to define how much an advertiser pays for every 1,000 impressions of an ad. An impression is registered each time an ad is displayed, whether it's a banner ad, video, or other form of digital promotion. CPM is one of several key pricing models in digital advertising. Unlike models such as Cost Per Click (CPC) or Cost Per Lead (CPL) — where advertisers pay for user interactions (clicks or conversions) — CPM focuses on exposure. This makes it particularly useful for brand awareness campaigns, which aim to reach as many people as possible rather than driving immediate user actions. While CPM pricing is widely used in digital advertising, many advertisers also track metrics like Click-Through Rate (CTR) and conversion rate to assess the effectiveness of an ad beyond just impressions. Higher visibility (through CPM) can increase the likelihood of conversions, but it's easier to measure the return on ad spend (ROAS) using performance-based metrics like PPC or CPL. NOTE: Mille is Latin for "one thousand," which is where the term "CPM" comes from. Technically, from the publisher's side the metric should be RPM (Revenue Per Mille), but advertisers and publishers often use CPM and RPM interchangeably. Some advertising platforms display revenue per thousand impressions as eCPM (effective CPM) instead of RPM. Updated September 5, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cpm Copy
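The arithmetic behind CPM pricing is simple: divide the number of impressions by 1,000 and multiply by the CPM rate. The Python sketch below uses made-up numbers.

cpm_rate = 4.50          # the advertiser pays $4.50 per 1,000 impressions
impressions = 250_000

cost = impressions / 1000 * cpm_rate
print(f"${cost:,.2f}")   # $1,125.00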

CPS

CPS Stands for "Classroom Performance System." CPS refers to a technology system that is used for educational purposes within a classroom. It includes both hardware and software that works together to create a modern, interactive learning environment for students. A typical CPS classroom setup includes a projector, a CPS Chalkboard, a computer running CPS software, and response pads that are given to the students. In a CPS-enabled classroom, the teacher can use a handheld CPS Chalkboard to give lessons or tests that are visually displayed on the projector. The students can interact with the lesson or test by using the response pads, which are similar to remote controls. Each response pad has several buttons (i.e. A through H), which can be used to answer test questions displayed on the projector in real-time. This allows each student to answer every question in a more systematic way than everyone yelling out the answer at one time. CPS has been shown to improve kids' learning capabilities and retention of subject matter. Because each student is able to interact with every aspect of the lesson, CPS can significantly increase students' interest in the subject matter. Considering how difficult it can be to keep kids' attention these days, many teachers may find CPS to be a welcome new technology. Updated May 31, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cps Copy

CPU

CPU Stands for "Central Processing Unit." The CPU is the primary component of a computer that processes instructions. It runs the operating system and applications, constantly receiving input from the user or active software programs. It processes the data and produces output, which may be stored by an application or displayed on the screen. The CPU contains at least one processor, which is the actual chip inside the CPU that performs calculations. For many years, most CPUs only had one processor, but now it is common for a single CPU to have at least two processors or "processing cores." A CPU with two processing cores is called a dual-core CPU and models with four cores are called quad-core CPUs. High-end CPUs may have six (hexa-core) or even eight (octo-core) processors. A computer may also have more than one CPU, which each have multiple cores. For example, a server with two hexa-core CPUs has a total of 12 processors. While processor architectures differ between models, each processor within a CPU typically has its own ALU, FPU, register, and L1 cache. In some cases, individual processing cores may have their own L2 cache, though they can also share the same L2 cache. A single frontside bus routes data between the CPU and the system memory. NOTE: The terms "CPU" and "processor" are often used interchangeably. Some technical diagrams even label individual processors as CPUs. While this verbiage is not incorrect, it is more accurate (and less confusing) to describe each processing unit as a CPU, while each processor within a CPU is a processing core. Updated July 11, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cpu Copy

Crash

Crash In computing, a crash is an unexpected termination of a process. Crashes can happen to individual applications as well as the operating system itself. Some crashes produce error messages, while other crashes may cause a program or the entire system to hang or freeze. When a crash occurs, it can cause a number of different problems. If you are typing a document in a word processor, for example, you may lose the changes you made since you last saved the document. If you are copying or downloading a file, the crash may produce a corrupt file. In some cases, a crash can even cause errors in the file system of your storage device. Since crashes create a negative user experience, software developers aim to create stable programs that do not crash. This involves removing bugs that produce undefined or infinite calculations and eliminating memory leaks. Some programs are even designed to handle crashes gracefully, generating detailed error messages if a "fatal error" occurs. Many productivity applications save your work periodically so you can recover your work after a crash. With older computers (before the turn of the century), when a program crashed, it often took down the whole system with it. This made restarting the computer a frequent task. Modern operating systems support multi-threading, which enables programs to run independently of each other. This allows the operating system and other programs to keep running when an application crashes. If the operating system itself crashes, however, you may still be forced to restart your computer. Updated June 19, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/crash Copy
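The "graceful" handling mentioned above usually means catching a fatal error, reporting it clearly, and exiting cleanly instead of terminating unexpectedly. Below is a minimal Python sketch, with a hypothetical settings file.

import sys

def load_settings(path):
    # A missing file would otherwise crash the program with an unhandled error
    with open(path) as f:
        return f.read()

try:
    settings = load_settings("settings.cfg")
except OSError as error:
    # Report a detailed "fatal error" message rather than crashing silently
    print(f"Fatal error: could not load settings ({error})", file=sys.stderr)
    sys.exit(1)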

CRM

CRM Stands for Customer Relationship Management. CRM refers to strategies, technologies, and practices businesses use to manage customer interactions and data. The goal is to enhance customer relationships, increase customer retention, and drive sales growth. In the current digital age, establishing and maintaining meaningful customer relationships can be difficult. This is especially true in e-commerce, where personal interactions are limited. CRM systems play a pivotal role in enhancing customer relationships through personalized communication and exceptional customer service. Customer Relationship Management solutions integrate various functions, such as sales, marketing, customer service, and support. By leveraging data analytics, businesses can gain valuable insights into customer behavior, preferences, and needs. These insights enable them to tailor their offerings and interactions to individual customers and clients. While analytics are helpful, effective CRM requires understanding and anticipating customers' needs. Even the most advanced tools can't replace genuine customer service. Updated July 11, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/crm Copy

Cron

Cron Cron is a job-scheduling system utility on Unix and Unix-like operating systems. Users can schedule tasks, known as "cron jobs," to run automatically at set intervals. Most Unix users schedule cron jobs to carry out system maintenance tasks, like deleting log files and creating backups, but a user can schedule any command or script they find useful. A user can schedule a cron job by creating a "crontab" file. A crontab file, or "cron table," specifies the command that will run and when to run it. A crontab file can contain a single cron job or multiple cron jobs. Each entry in a crontab has five variable fields that represent when to run the command, followed by the command itself. In order, the five fields are the minute (0-59), the hour (0-23), the day of the month (1-31), the month (1-12), and the day of the week (0-6, Sunday to Saturday). Leaving an asterisk in place of a number runs the cron job at every instance of that variable. This allows the user to easily set a cron job to run every set number of minutes, hours, days, or months as necessary. Leaving every asterisk in place will run the cron job every minute. For example, the following cron job will echo the phrase "hello world" at 3:30 AM on the first day of every month:
30 3 1 * * echo hello world
Updated November 14, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cron Copy

Crop

Crop To crop an image is to cut off part of the image along one or more sides. Photographers often crop an image to remove some unnecessary detail from the edge of a photo or to refine the photo's artistic composition. A crop tool is one of the most common features found in all photo editing applications. Cropping a photo after taking it is a common step for professional and amateur photographers alike. While the most common use for cropping is to remove unwanted details along the edge, it is also a helpful artistic tool. You can crop an image to adjust its composition and align the subjects in the frame according to the rule of thirds. You can alter an image's aspect ratio slightly to make it taller or wider, or switch between landscape and portrait orientations entirely. You can also use the crop tool to zoom in on a photo's subject, increasing its apparent size and emphasizing its importance. Cropping a photo in Apple Photos to change its aspect ratio and emphasize the subject When you enable a photo editing application's crop tool, you have several ways to crop. You can use the mouse to draw a freeform rectangle anywhere in the image to draw new borders, or make a slight adjustment by moving the original borders. You can also lock the crop tool to a specific aspect ratio to keep it uniform with other images. Cropping an image discards the cropped-out pixels entirely. You cannot restore this discarded information later, so you may want to create a backup of the image file before cropping. Since cropping removes pixels without adding new data, a cropped image will be smaller than the original. For example, if you start with a 12-megapixel photo and crop out a third of it, you're left with an 8-megapixel image. If you print the original and the cropped versions at the same DPI, the cropped version will be physically smaller; if you print them so that they're the same physical size, the cropped version may appear blurrier since it's stretching fewer pixels out over the same page size. NOTE: Some photo editors like Apple Photos automatically preserve the original image before any edits as a separate data fork in the image file, allowing you to restore the original if you don't like the changes you've made. Updated October 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/crop Copy
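Cropping can also be done programmatically. The Python sketch below assumes the third-party Pillow imaging library and hypothetical file names; the crop box lists the left, upper, right, and lower pixel edges of the region to keep.

from PIL import Image   # Pillow, a third-party imaging library

photo = Image.open("vacation.jpg")
print(photo.size)                       # for example, (4000, 3000)

# Keep a 3000 x 2000 pixel region and discard the pixels outside the box
cropped = photo.crop((500, 500, 3500, 2500))
cropped.save("vacation_cropped.jpg")    # saving to a new file preserves the original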

Cross-Browser

Cross-Browser When a software program is developed for multiple computer platforms, it is called a cross-platform program. Similarly, when a website is developed for multiple browsers, it is called a cross-browser website. The job of a Web developer would be much easier if all browsers were the same. While most browsers are similar in both design and function, they often have several small differences in the way they recognize and display websites. For example, Apple's Safari uses a different HTML rendering engine than Internet Explorer. This means the browsers may display the same Web page with slightly different page and text formatting. Since not all browsers support the same HTML tags, some formatting may not be recognized at all in an incompatible Web browser. Furthermore, browsers interpret JavaScript code differently, which means a script may work fine in one browser, but not in another. Because of the differences in the way Web browsers interpret HTML and JavaScript, Web developers must test and adapt their sites to work with multiple browsers. For example, if a certain page looks fine in Firefox, but does not show up correctly in Internet Explorer, the developer may change the formatting so that it works with Internet Explorer. Of course, the page may then appear differently in Firefox. The easiest fix for browser incompatibility problems is to use a more basic coding technique that works in both browsers. However, if this solution is not possible, the developer may need to add code that detects the type of browser, then outputs custom HTML or JavaScript for that browser. Making a cross-browser site is usually pretty simple for basic websites. However, complex sites with a lot of HTML formatting and JavaScript may require significant extra coding in order to be compatible with multiple browsers. Some developers may even generate completely different pages for each browser. While CSS formatting has helped standardize the appearance of Web pages across multiple browsers, there are still several inconsistencies between Web browsers. Therefore, cross-browser design continues to be a necessary aspect of Web development. Updated January 2, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/crossbrowser Copy

Cross-Platform

Cross-Platform Cross-platform software is software that is available for more than one computer platform. Each version of a cross-platform application may have slight differences in the user interface and feature set, but they are fundamentally the same program capable of opening the same files and performing the same functions. The opposite of a cross-platform app is a "native" app, developed exclusively for a single platform. Some cross-platform apps are available for both desktop and mobile platforms. In these cases, the mobile version of the app may have a limited feature set, but is compatible with the same documents and capable of most of its functions. For example, Microsoft Word is a cross-platform app with a full-featured desktop version available for Windows and macOS; it also offers a mobile version for iOS and Android and a web app that can run in a browser. Microsoft Word in Windows 11 and macOS Historically, each version of a cross-platform application required separate source code using each platform's native programming languages and APIs. However, modern cross-platform app frameworks and libraries help developers create apps using a single code base capable of running on multiple platforms. For example, the Electron app framework uses web technologies like HTML, CSS, and JavaScript to allow apps to run on desktop and mobile platforms. NOTE: Hardware peripherals compatible with multiple platforms, like USB mice and keyboards, may also be considered cross-platform. Updated October 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/crossplatform Copy

CRT

CRT Stands for "Cathode Ray Tube." CRT displays were analog computer monitors and televisions commonly used up through the mid-2000s. A CRT display operates by firing electrons from the back of the vacuum tube towards an array of phosphor dots that coat the inside of the glass screen. These phosphor dots — arranged in stripes of red, green, and blue (RGB) — glow when hit by an electron to produce an image. Physically, CRT monitors were bulky and heavy compared to modern LCDs. The vacuum tube used in a CRT monitor required thick, lead-coated glass to hold up to the pressure; larger monitors required thicker glass, making them even heavier. A 17" CRT monitor was one of the most common sizes and usually weighed around 35 or 40 lbs (15-18 kg). The length required for the vacuum tube to operate meant that most CRT monitors were deeper than they were wide or tall, and the shape of the tube meant that they were very front-heavy. A ViewSonic CRT monitor Despite being completely replaced in the market by flat-panel LCDs, CRT monitors had some advantages over LCDs during the time they were both available. CRT monitors were able to reproduce colors more accurately than early LCDs. They did not have a native resolution, allowing you to switch between resolutions without any loss of image quality. They also produced an image faster with less input delay. Updated December 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/crt Copy

Cryptocurrency

Cryptocurrency A cryptocurrency is a software token that can be transferred securely and verifiably from one person to another. Cryptocurrency transactions are recorded on a public distributed database, called a blockchain, that uses cryptographic verification methods to authenticate each transaction. Cryptocurrencies are decentralized, not controlled by a central bank or other authority, and have no intrinsic value besides their scarcity. Thousands of separate cryptocurrencies were created during the cryptocurrency boom of the late 2010s and early 2020s, inspired by the success of early cryptocurrencies like Bitcoin and Ethereum. Each cryptocurrency has its own independent blockchain that records its entire history of transactions. Distributed groups of computers, often using specialized hardware and software, host the blockchain and process new transactions by appending data to it; those computers receive newly-created tokens as a reward for their work. Cryptocurrency owners store their tokens in a secure, encrypted application known as a wallet. A wallet includes a public address, similar to a bank account number, used to send and receive tokens. It also uses a cryptographic secret key to unlock and access the wallet. For convenience, many people store their cryptocurrency on exchanges, which maintain online wallets and provide services to exchange tokens for money or other tokens. A cryptocurrency blockchain processes transactions through either mining or staking. Mining involves groups of computers solving cryptographic puzzles while processing transactions and appending those transactions to the blockchain. Staking involves less computing power, requiring participants to lock up some of their tokens to prove their stake in the blockchain instead of solving puzzles. Bitcoin is the most popular mined cryptocurrency, and Ethereum is the most popular staked token. The Risks of Cryptocurrency Despite the name, cryptocurrencies do not generally function as currency. Cryptocurrencies have seen rapid swings in value from day to day that frequently change a token's purchasing power. Because of this volatility, few vendors accept Bitcoin directly, let alone smaller cryptocurrency tokens; most transactions first convert a cryptocurrency to a stable fiat currency like dollars or euros. It is much more common for enthusiasts to purchase tokens as a speculative investment, hoping that the token's value increases so that they may sell it later at a profit. Cryptocurrency's anonymity and potentially high value make people's wallets tempting targets for fraud. Both locally-stored wallets and those hosted on exchanges are common targets. In addition to direct intrusion attempts, phishing emails and other scams are frequently used to trick people into granting access to their tokens. Ransomware hackers attack computer systems and hold files for a cryptocurrency ransom. Unfortunately, once a victim's cryptocurrency is in the attacker's wallet and the blockchain has recorded the transaction, it cannot be recovered. Updated July 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cryptocurrency Copy

Cryptography

Cryptography Cryptography is the science of protecting information by transforming it into a secure format. This process, called encryption, has been used for centuries to prevent handwritten messages from being read by unintended recipients. Today, cryptography is used to protect digital data. It is a division of computer science that focuses on transforming data into formats that cannot be recognized by unauthorized users. An example of basic cryptography is an encrypted message in which letters are replaced with other characters. To decode the encrypted contents, you would need a grid or table that defines how the letters are translated. For example, the translation grid below could be used to decode "1234125678906" as "techterms.com". 1→t, 2→e, 3→c, 4→h, 5→r, 6→m, 7→s, 8→., 9→c, 0→o The above table is also called a cipher. Ciphers can be simple translation codes, such as the example above, or complex algorithms. While simple codes sufficed for encoding handwritten notes, computers can easily break, or figure out, these types of codes. Because computers can process billions of calculations per second, they can even break complex algorithms in a matter of seconds. Therefore, modern cryptography involves developing encryption methods that are difficult for even supercomputers to break. Updated July 15, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cryptography Copy
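The translation grid above can be written directly in code. This Python sketch decodes the sample message with a simple substitution cipher; it is purely illustrative and offers no real security.

# The cipher from the example: each digit maps to one character
cipher = {"1": "t", "2": "e", "3": "c", "4": "h", "5": "r",
          "6": "m", "7": "s", "8": ".", "9": "c", "0": "o"}

encrypted = "1234125678906"
decoded = "".join(cipher[digit] for digit in encrypted)

print(decoded)   # techterms.com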

CSS

CSS Stands for "Cascading Style Sheet." CSS is a style sheet language used for formatting content in HTML webpages. CSS style sheets can define the appearance and formatting of text, tables, and other elements separately from the content itself. Styles may be found within a webpage's HTML file or in a separate document referenced by multiple webpages. CSS helps web developers create a uniform look across an entire website. Instead of formatting the appearance of each table and block of text in a webpage's HTML code, a style is defined once in a CSS style sheet. Common HTML formatting tags, like those for paragraphs and tables, can have custom formatting defined in a CSS file; custom styles can also be created and applied to text, images, and tables. Once a style is defined, it can be used by any page that links to the CSS file. By separating a webpage's content from its formatting, CSS makes it easy to update styles across several pages at once. For example, if you want to increase the body text size from 10pt to 12pt across dozens of separate HTML pages, you only need to change the style once in the CSS file. The text size changes for every instance of that style on any webpage using that style sheet. Some web browsers include a reader mode that automatically changes text formatting from the webpage's default to a special built-in stylesheet optimized for easy reading. NOTE: The word "cascade" refers to the priority scheme used by CSS when multiple style rules overlap. User-defined CSS overrules styles applied directly to an HTML tag, which overrule any styles defined within the HTML document's header, which overrule any styles defined in an external CSS file. Updated March 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/css Copy

CSV

CSV Stands for "Comma-Separated Values." CSV is a standard way to store structured data in a plain text format. It is a common export option in desktop applications and some websites. Most spreadsheet programs can import data from a .CSV file. A list of comma-separated values represents a table. Each line in a CSV file is a row and the value between each comma is a column. The first row is often a table header containing labels for each column. Since the data in a CSV file represents a table, each line should have the same number of comma-separated values. CSV Format Since CSV files use the comma character "," to separate columns, values that contain commas must be handled as a special case. These fields are wrapped within double quotation marks. The first double quote signifies the beginning of the column data, and the last double quote marks the end. If the value contains a string with double quotes, these are replaced by two double quotes, or "". Below are some standard CSV formatting rules: Table data is represented using only plain text. The first line may or may not represent the table header. Rows are separated by line breaks (newline characters). Columns (fields) are separated by commas. All lines contain the same number of values. Fields that contain commas must begin and end with double quotes. Fields that contain line breaks must begin and end with double quotes (not all programs support values with line breaks). All other fields do not require double quotes. Double quotes within values are represented by two contiguous double quotes. Example CSV Data The following three lines represent a table with three rows and four columns, starting with a table header.
ID,Website,URL,Description
1,TechTerms,https://techterms.com,"TechTerms, the Computer Dictionary"
2,Slangit,https://slangit.com,"Slangit, the ""Clean Slang"" Dictionary"
The data above may be displayed in a table like this:
ID | Website | URL | Description
1 | TechTerms | https://techterms.com | TechTerms, the Computer Dictionary
2 | Slangit | https://slangit.com | Slangit, the "Clean Slang" Dictionary
Using CSV Files A *.csv file can be opened, viewed, and edited in any text editor. However, because of the minimalistic nature of the CSV format, the data is often difficult to read. Therefore, it is best to view CSV files in a spreadsheet program like Microsoft Excel or Apple Numbers. These applications parse the comma-delimited data and display the values as a table within a spreadsheet. File extension: .CSV Updated July 18, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/csv Copy
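Most programming languages can also parse CSV directly. As a brief illustration, Python's built-in csv module reads the example data above into rows and unwraps the quoted fields automatically (the file name is hypothetical):

import csv

# Assumes the example data above has been saved as "websites.csv".
with open("websites.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)   # treats the first line as the table header
    for row in reader:
        # Quoted fields, including embedded commas and "" escapes, are handled automatically.
        print(row["Website"], "-", row["Description"])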

CTP

CTP Stands for "Composite Theoretical Performance." CTP is used to measure the performance of computer processors. The values returned by CTP calculations are used for benchmarking purposes, such as comparing the performance of different processors. For example, Intel and AMD use CTP calculations to measure how many millions of theoretical operations per second (MTOPS) their processors can perform. An Intel Pentium M 770, which runs at 2.13 GHz, has a CTP of 7100 MTOPS, while an AMD Opteron 146, which runs at 2.0 GHz, has a CTP of 7168 MTOPS. As seen in the example above, faster processor speeds do not always result in a higher CTP. Other considerations, such as the processor's architecture and the speed of the frontside bus, also affect the overall performance. CTP is useful for comparing different brands of processors, as well as comparing different models of processors made by the same company. With dual- and quad-processor systems becoming more prevalent, CTP is now also being used to measure the performance increase when multiple processors are used together. CTP also stands for "Computer to Plate." This is a process where printing plates are made without using costly film. Instead, the images are sent directly to the plate from the computer. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ctp Copy

CTR

CTR Stands for "Click-Through Rate." CTR is a term used in online marketing that measures the percentage of times an ad gets clicked out of the times it appears. It is one measurement of the success of an advertisement in an online marketing campaign using a Pay Per Click (PPC) model. Online marketers track the CTR for a campaign as a whole, as well as for each individual ad and keyword. This allows advertisers to see which ads are most effective, then tailor their ads for the most effective messaging. Search engines that show advertisements often take an ad's CTR into account when determining its relevancy and overall quality score. An ad with a high quality score will appear more often and cost less on a Cost Per Click (CPC) basis, which incentivizes advertisers to continually improve the CTR of their ads. The number that is considered "good" for a CTR depends on the context of the ad. Most display and banner ads have a CTR of under 1%, social media ads are typically between 1-2%, and a good search keyword CTR is around 3-5%. CTR is also a metric tracked in email marketing, measuring the percentage of readers that click a link in an email. Other measurements for email marketing include the open rate (what percentage of people who received an email opened it) and the bounce rate (what percentage of emails sent were not delivered). Updated November 10, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ctr Copy
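The calculation itself is simple: divide clicks by impressions and express the result as a percentage. A quick sketch with made-up numbers:

def click_through_rate(clicks: int, impressions: int) -> float:
    """Return the percentage of impressions that resulted in a click."""
    return clicks / impressions * 100

# Hypothetical campaign: 1,200 clicks out of 50,000 ad impressions.
print(f"{click_through_rate(1200, 50000):.2f}%")   # 2.40%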

CTV

CTV Stands for "Connected Television." CTV refers to any television that can connect to the Internet and stream video content. The term encompasses Smart TVs, which can connect directly to the Internet over Wi-Fi or ethernet, and any television that uses an Internet-connected streaming device like an Apple TV, Amazon FireTV stick, or Roku. A CTV can still connect to an antenna or set-top box to access traditional over-the-air, cable, and satellite TV but has additional options to access live and on-demand streaming video using built-in and third-party apps like Netflix, Hulu, and YouTube TV. Most televisions sold now (as of 2023) are CTVs. CTV is often used synonymously with another term, OTT (Over-The-Top). OTT refers to Internet-delivered video content accessible by Smart TVs, streaming devices, and other devices like desktop and laptop computers, tablets, and smartphones. As an increasing share of households drop cable and satellite TV services, they're accessing shows using OTT services on their CTVs instead. A Vizio D-Series Smart TV is an example of a Connected TV The advertising industry first introduced the term "CTV" to refer to a new advertising method in contrast to traditional linear broadcast television. Advertising to CTV devices adds several benefits to advertisers that were not possible before. For example, each video stream is delivered independently to each device, allowing the streamer to show different ads to each viewer targeted to their specific demographics and interests. They may also track a viewer's habits across devices using information like shared IP addresses and user accounts. Advertisers can also place ad buys programmatically and in real-time (instead of showing the same ad each time a show is streamed), which is more like how ads are placed on websites instead of traditional television. Updated December 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ctv Copy

CUDA

CUDA Stands for "Compute Unified Device Architecture." CUDA is a parallel computing platform developed by NVIDIA and introduced in 2006. It enables software programs to perform calculations using both the CPU and GPU. By sharing the processing load with the GPU (instead of only using the CPU), CUDA-enabled programs can achieve significant increases in performance. CUDA is one of the most widely used GPGPU (General-Purpose computation on Graphics Processing Units) platforms. Unlike OpenCL, another popular GPGPU platform, CUDA is proprietary and only runs on NVIDIA graphics hardware. However, most CUDA-enabled video cards also support OpenCL, so programmers can choose to write code for either platform when developing applications for NVIDIA hardware. While CUDA only supports NVIDIA hardware, it can be used with several different programming languages. For example, NVIDIA provides APIs and compilers for C and C++, Fortran, and Python. The CUDA Toolkit, a development environment for C/C++ developers, is available for Windows, OS X, and Linux. Updated July 3, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cuda Copy
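As a rough illustration of how a program hands work to the GPU, the sketch below uses Numba's CUDA support for Python, one of several ways to write CUDA code. It assumes an NVIDIA GPU, a working CUDA driver, and the third-party numba and numpy packages; the function and variable names are just for the example:

import numpy as np
from numba import cuda

@cuda.jit
def add_vectors(a, b, result):
    i = cuda.grid(1)          # index of this GPU thread across the entire grid
    if i < a.shape[0]:        # guard against threads past the end of the arrays
        result[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
result = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_vectors[blocks, threads_per_block](a, b, result)   # launch the kernel on the GPU
print(result[:5])

Each GPU thread adds a single pair of elements, so the million additions run largely in parallel rather than one after another on the CPU.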

Cursor

Cursor A cursor is a movable icon on a computer screen that indicates where user input will take place. The two most common cursor types are the mouse cursor (which selects objects and clicks buttons) and the text cursor (which shows where the next typed character appears). Both cursors are controlled by the computer's mouse or trackpad, and the text cursor may also be controlled by the keyboard. In a computer with a graphical user interface, the mouse cursor is the primary way for you to interact with things on the screen. As you move the mouse across a surface (or your finger across a trackpad), the mouse cursor moves with it. You can issue commands by moving the mouse cursor onto an object and clicking the mouse button. The mouse cursor is typically an arrow, with its point indicating the exact spot that you'll interact with. However, the cursor's shape often changes to reflect a special interaction. For example, a mouse cursor changes to a hand icon when it hovers over a hyperlink on a webpage, or to a double-ended arrow when you can click and drag a window's border to resize it. Some applications, particularly image editors, change the cursor's appearance when a tool is active to help you know what will happen when you click the mouse. For example, a crosshair icon indicates a rectangular selection, while a magnifying glass means you will zoom in when you click somewhere. The text cursor, also known as the insertion point, is a special cursor that appears in text fields and documents to show where newly typed characters will appear. It is typically a blinking vertical line, although sometimes it may be a horizontal line just below the text baseline. You can reposition the insertion point by clicking elsewhere in the text field, or by using the arrow keys to move it left, right, up, or down. In most applications, the mouse cursor disappears once you start typing to reduce distractions, but it will reappear once you move the mouse. Updated October 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/cursor Copy

Cut

Cut Cut is a command that allows you to "cut out" a selection of text or other data and save it to the clipboard. It is similar to the Copy command, but instead of just copying the data to the clipboard, it removes the selected data at the same time. For example, if you highlight a sentence in a word processing program and select "Cut," the sentence is removed from the document and is added to the clipboard. Therefore, selecting "Cut" is a simple way to both copy and delete a selection in a single step. Objects that can be cut include text, images, audio and video segments, and other types of data. However, you may notice that some selections can only be copied and not cut. This is typically because the object is not editable and therefore can only be copied. If you cut a selection from a document, it can be pasted into another similar document. However, if you copy something else to the clipboard before pasting the selection, the cut data may be lost. Like the Copy and Paste commands, the Cut command is located in the Edit menu of most programs. The standard shortcut for "Cut" is Control-X in Windows and Command-X in Mac OS X. (If you're wondering why it isn't the "C" key, that shortcut is reserved for the Copy command.) You can tell whether a program supports the Cut command by opening the Edit menu and looking for it. Updated February 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cut Copy

CVE

CVE Stands for "Common Vulnerabilities and Exposures." CVE is a system that provides unique identifiers for publicly known cybersecurity vulnerabilities. It operates like an extensive catalog, indexing security risks in software and assigning a distinct identifier to each, thereby facilitating standardized discussions, assessments, and management of these vulnerabilities across various platforms. The core principle of CVE is to establish a unified understanding of software vulnerabilities. When a new vulnerability is identified, it is recorded in the CVE database with a unique identifier in the format "CVE-YYYY-NNNNN", where "YYYY" represents the year of disclosure. This system, complemented by descriptive metadata and references, enables professionals in different fields—such as software development, system administration, and cybersecurity — to accurately and consistently reference each vulnerability. The CVE initiative is not standalone but part of a broader cybersecurity ecosystem. It is managed by the MITRE Corporation, with backing from the U.S. government. The integration of CVE identifiers into various cybersecurity tools and services aids in effective vulnerability management and remediation strategies. Additionally, the National Vulnerability Database (NVD) enhances the CVE system by providing extended metadata, including risk assessments and impact evaluations for each listed vulnerability. CVE's role is critical in the contemporary digital landscape. It provides a common language for diverse entities, from IT departments implementing system patches to software vendors resolving product bugs. Awareness and understanding of relevant CVEs are essential for ensuring that the software and systems in use are protected against known vulnerabilities, thereby promoting a more secure digital environment for individuals and organizations alike. Updated January 22, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cve Copy
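Because CVE identifiers follow a predictable pattern, they are easy to recognize and extract programmatically. A minimal Python sketch, assuming the sequence number is at least four digits (the current convention):

import re

# Matches identifiers such as CVE-2021-44228; the sequence number has four or more digits.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

advisory = "The logging flaw tracked as CVE-2021-44228 was patched in the latest release."
print(CVE_PATTERN.findall(advisory))   # ['CVE-2021-44228']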

Cyberbullying

Cyberbullying There are bullies and then there are cyberbullies. While bullying typically happens at school or work, cyberbullying takes place over cyberspace. This includes both Internet and cell phone communication. Like physical bullying, cyberbullying is aimed at younger people, such as children and teenagers. It may involve harassing, threatening, embarrassing, or humiliating young people online. Cyberbullying can take many forms. The following are just a few examples: Making fun of another user in an Internet chat room. Harassing a user over an instant messaging session. Posting derogatory messages on a user's Facebook or MySpace page. Circulating false rumors about someone on social networking websites. Publishing lewd comments about another person on a personal blog. Posting unflattering pictures of another user on the Web. Spamming another user with unwanted e-mail messages. Sending threatening or provocative e-mails. Repeatedly calling another person's cell phone. Sending unsolicited text messages to another user. Cyberbullying may seem humorous to some people, but it is a serious matter. Kids who are bullied online often feel hurt and rejected by their peers. This can lead to low self esteem and depression. Therefore, cyberbullying should not be tolerated and should be reported to authorities. NOTE: Technically, cyberbullying takes place between two young people. When adults are involved, it may be called cyber-harassment or cyberstalking. Updated September 15, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cyberbullying Copy

Cybercrime

Cybercrime Cybercrime is criminal activity done using computers and the Internet. This includes anything from downloading illegal music files to stealing millions of dollars from online bank accounts. Cybercrime also includes non-monetary offenses, such as creating and distributing viruses on other computers or posting confidential business information on the Internet. Perhaps the most prominent form of cybercrime is identity theft, in which criminals use the Internet to steal personal information from other users. Two of the most common ways this is done are through phishing and pharming. Both of these methods lure users to fake websites (that appear to be legitimate), where they are asked to enter personal information. This includes login information, such as usernames and passwords, phone numbers, addresses, credit card numbers, bank account numbers, and other information criminals can use to "steal" another person's identity. For this reason, it is smart to always check the URL or Web address of a site to make sure it is legitimate before entering your personal information. Because cybercrime covers such a broad scope of criminal activity, the examples above are only a few of the thousands of crimes that are considered cybercrimes. While computers and the Internet have made our lives easier in many ways, it is unfortunate that people also use these technologies to take advantage of others. Therefore, it is smart to protect yourself by using antivirus and spyware blocking software and being careful where you enter your personal information. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cybercrime Copy

Cyberspace

Cyberspace "Cyberspace" is the digital world created by computers and other network-connected devices. It encompasses everything from local data storage to global communication over the Internet. Whether you're viewing photos on your computer, sending an email, or browsing the web, you are operating within cyberspace. Origin of Cyberspace The term "Cyberspace" was popularized by science fiction author William Gibson in his 1984 novel Neuromancer. Gibson depicted cyberspace as: "a consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphical representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the non-space of the mind, clusters and constellations of data." A visual depiction of cyberspace Though a novel concept at the time, cyberspace is now an integral part of everyday life. In fact, it is so ubiquitous that people rarely use the term anymore. Instead, people use terms like social media, e-commerce, and the metaverse to describe more specific aspects of cyberspace. Updated July 31, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cyberspace Copy

Cybersquatter

Cybersquatter In the early days of the United States, pioneers traveled west and claimed federal land as their own. These people were called "squatters," since they claimed rights to the land simply by occupying it. In the mid-1800s, during the California gold rush, squatters became especially prominent and settled land throughout the west coast. In the early 1990s, a new gold rush began, but this time the rush was for domain names, rather than gold. Many early Internet users saw the potential value of prominent domain names and began to register as many domains as they could. Over the course of a few years, nearly all common "dot coms" were registered. Many of these domain names were registered for investment purposes, rather than being used for legitimate websites. This practice soon became known as "cybersquatting." Cybersquatters may own anywhere from a single domain to a few thousand domain names. These "domainers," as they are also called, typically register domain names that contain popular words and phrases. High profile domain names may generate traffic through manual "type-ins" or may simply be attractive to potential buyers. Some domainers, called "typosquatters," register domain names that are similar to well-known websites, but contain typos. The goal of these domains is to generate traffic through mistyped URLs. Generally, cybersquatters profit from their domain names in one of two ways: 1) generating advertising clicks on parked pages (single-page websites), or 2) selling the domains at a significant premium to those interested in buying them. While some cybersquatters have made huge profits by selling high-interest domain names, others have been forced to give up domains to the rightful owners. The Anticybersquatting Consumer Protection Act (ACPA) was passed in 1999, which gives owners of trademarked or registered names legal rights to a related domain name. In general terms, the law states that users cannot register a domain name that is the same as or similar to the name of a known entity. This prevents cybersquatters from extorting money from businesses or individuals by obtaining a specific domain name. If a dispute over a domain name arises, the two parties may take the case through a legal proceeding. However, since this is a time-consuming process, ICANN has developed the Uniform Domain Name Dispute Resolution Policy (UDRP), which gives trademark owners a simple means to retrieve domain names from cybersquatters. While cybersquatters still exist and remain prominent today, the rightful owners of certain names now have an easier (and much less expensive) way of getting the domain names they deserve. Updated December 23, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/cybersquatter Copy

DAC

DAC Stands for "Digital-to-Analog Converter." A DAC is a hardware component that converts a digital signal into an analog signal. Computers and other digital devices manage their data digitally, so connecting one to an analog output device requires a DAC. The most common use for a DAC is to convert a digital audio signal from a computer or other digital device into an analog audio signal for output through speakers and headphones. Everything that processes digital audio for playback includes a DAC, including internal computer sound cards and external USB audio interfaces, as well as other devices like smartphones and televisions. It converts an incoming digital audio signal (which consists of a stream of 0s and 1s) into an analog signal that consists of an electrical charge, which can be recognized by speaker inputs and converted into sound waves. This signal is either sent to an amplifier to boost its power, or directly to a speaker or headphone connection. A Creative SoundBlaster USB audio interface includes a Digital-to-Analog Converter Multiple devices involved in audio playback can include a DAC, but only the device that converts the signal uses its DAC. For example, a laptop computer uses its DAC to convert audio when you play through analog headphones or its built-in speakers. If you connect a set of Bluetooth headphones, the laptop instead sends a fully digital signal over the wireless connection. The headphones include their own built-in DAC that converts the digital audio signal into sound waves through each driver. Digital audio signals can be transmitted from device to device without any quality loss, but converting a digital signal into an analog one comes with the potential for degradation. Not all DACs are built to the same standard, and differences in circuit design and component quality can affect how accurate the analog signal is. DACs using higher-quality components can produce a cleaner analog signal with less distortion, noise, and interference. NOTE: Video cards that connect to an analog interface like VGA also use a DAC. A video DAC converts digital image and video signals into an analog signal for display on an analog CRT monitor. Interfaces designed to connect to digital displays don't require a DAC since LCD, LED, and OLED monitors convert the digital signal directly into pixels on the screen. Updated November 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dac Copy

Daemon

Daemon A daemon (also known as a "background process") is a program that runs in the background and performs tasks without any input from the computer's user. Instead, it waits for specific triggering events or conditions before performing its functions. Unix and Unix-based operating systems like Linux and macOS use daemons for many critical system tasks. The name 'daemon' comes from Greek mythology, referring to an "inner or attendant spirit." Like the mythical spirit, a daemon process hides out of sight. It performs a helpful task on its own and without being asked. Most importantly, without the daemon, these tasks would not get done at all. Activity Monitor in macOS with the sysmond daemon selected and several others visible While viewing your computer's active processes, you can identify daemons by their names — by tradition, the process names of Unix daemons end with the letter 'd.' For example, httpd is the background service on a web server that waits for incoming requests, then provides webpages and other files. Other commonly used daemons include crond, responsible for running scheduled cron jobs, and lpd, which sends print jobs to a network printer. Some daemons ignore this naming convention, including the full word 'daemon' in their names. NOTE: Background processes that run without user input in Windows are known as services, not daemons, but fill the same role. Updated May 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/daemon Copy
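The sketch below shows, in simplified form, how a Unix program can detach itself from the terminal and keep working in the background. It is a bare-bones Python illustration with a hypothetical log file path, not a production-ready daemon (which would also handle signals, locking, and logging properly):

import os
import time

def daemonize():
    if os.fork() > 0:        # first fork: the parent exits, the child keeps running
        os._exit(0)
    os.setsid()              # start a new session with no controlling terminal
    if os.fork() > 0:        # second fork: prevent the process from reacquiring a terminal
        os._exit(0)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):     # redirect stdin, stdout, and stderr to /dev/null
        os.dup2(devnull, fd)

if __name__ == "__main__":
    daemonize()
    while True:              # work quietly in the background, like a typical daemon
        with open("/tmp/exampled.log", "a") as log:
            log.write("exampled is still running\n")
        time.sleep(60)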

Dark Mode

Dark Mode Dark mode is a software option that makes the user interface darker. It changes light backgrounds to a dark color and changes text from dark to light. The result is a pseudo-inverted interface that isn't exactly the opposite of the "light mode," but has mostly dark colors. Dark mode, also "night mode," has been popular with developers for many years. Since developers stare at source code several hours a day, the dark background produces less eye strain. Recently, dark mode has become a popular option for all end users since it is now a standard option for both Windows and macOS. The first version of Windows to support dark mode was Windows 10 October 2018 Update. The first version of macOS to support dark mode was macOS 10.14 Mojave, released in September, 2018. There is no standard dark mode palette, so each developer can choose how to implement it. Some dark modes have dark grey themes, while others are almost black. Other versions have different hues, such as slate or dark blue. The option to switch to dark mode is typically located in the "General" or "Interface" section of a program's settings. It is also an option on some websites, including TechTerms.com. NOTE: To activate dark mode on TechTerms.com, click the sun/moon icon in the navigation bar. Updated May 31, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dark_mode Copy

DAS

DAS Stands for "Direct Attached Storage." DAS refers to any storage device connected directly to a computer. Examples include HDDs, SSDs, and optical drives. While DAS can refer to internal storage devices, it most often describes external devices, such as an external hard drive. The term "DAS" was created to differentiate between network-attached storage (NAS) and direct-attached storage. Before NAS and storage area networks (SANs) were available, direct-attached storage was the only option. While DAS is still the most common type of storage used in personal computing, network and server administrators often need to choose between DAS and NAS for a storage solution. The primary benefit of DAS vs NAS is the simplicity of the setup. You can simply connect a device to a computer and, as long as the necessary drivers are available, it will show up as an additional storage device. There is no need to configure network settings or set up permissions for individual computers. The main drawback of DAS is that a direct-attached device is only accessible via the computer to which it is attached. Therefore, a computer must be configured as a file server in order for other systems to access any connected DAS devices. Updated March 26, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/das Copy

Dashboard

Dashboard Dashboard is a user-interface feature Apple introduced with the release of Mac OS X 10.4 Tiger. It allows access to all kinds of "widgets" that show the time, weather, stock prices, phone numbers, and other useful data. With the Tiger operating system, Apple included widgets that do all these things, plus a calculator, language translator, dictionary, address book, calendar, unit converter, and iTunes controller. Besides the bundled widgets, there are also hundreds of other widgets available from third parties that allow users to play games, check traffic conditions, and view sports scores, just to name a few. The dashboard of widgets is accessed by clicking the Dashboard application icon, or by simply pressing a keyboard shortcut (F12 by default). Clicking a plus "+" icon in the lower-left hand corner of the screen provides the user with a list of all installed widgets. Clicking the widgets or dragging them onto the desktop makes them active. They can be individually closed by clicking the close box, just like other open windows. Pressing the keyboard shortcut (F12) makes them instantly disappear, removing them from view until the user needs them again. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dashboard Copy

Data

Data In the most general sense, data refers to a collection of individual values that, when processed, convey information. Computer data is information that is stored and processed digitally on a computer. Data on a computer can take many forms, including text, images, audio, or video. It may be loaded into memory and processed by the computer's CPU, then stored as files in folders on a hard drive or solid-state drive. Data on a computer is stored as binary data, where every file consists of a series of 1s and 0s called bits. 8 bits make up one byte, which is the basic unit of data storage. Data is encoded using different methods for different data types. For example, text data is stored using a character encoding method like ASCII or Unicode where each letter is represented by a single byte or series of bytes. Image data, meanwhile, is stored by assigning each pixel in a grid a color value using as few as 8 or as many as 32 bits per pixel. Computers can store data on many different kinds of storage devices. Hard drives and solid-state drives are the most common ways to store data on a computer. Flash drives can easily transfer data from one computer to another. Computers with optical drives can burn data to (re)writable CDs and DVDs. Network-connected computers can transfer data from one computer to another over a local network or the Internet. Reading or transferring digital data does not cause any deterioration or quality loss over time. Updated December 13, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/data Copy
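The relationship between characters, bytes, and bits described above is easy to observe in code. A small Python sketch using built-in encoding support:

text = "Hi"
ascii_bytes = text.encode("ascii")               # each character becomes one byte
print(list(ascii_bytes))                         # [72, 105] - the numeric values of 'H' and 'i'
print([format(b, "08b") for b in ascii_bytes])   # ['01001000', '01101001'] - 8 bits per byte

euro = "€".encode("utf-8")                       # some Unicode characters need several bytes
print(len(euro))                                 # 3 - the euro sign takes three bytes in UTF-8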

Data Center

Data Center A data center is a central location for storing and processing data. It may include anywhere from a few computers to several thousand. Most data centers contain servers, such as web, mail, and file servers. However, some are designed specifically for data storage and others for cluster computing. Data centers may serve one or more of the following functions: Web hosting Mail hosting Cloud computing Data storage Data backup CDN edge caching Edge computing Bitcoin mining A typical data center contains rows of rack-mounted computers, which are space-efficient and easy to access. They also include networking equipment, such as switches, routers, and Ethernet cables. Most have a high-speed Internet connection or connect directly to the Internet backbone. The equipment each data center contains depends on its purpose. For example, a web hosting site may have thousands of web servers with various configurations. A data storage facility may include a large number of storage devices and only a few computers. An edge computing center requires machines with high-end processors but doesn't need a lot of storage. Because computer technology is constantly evolving, data centers must be scalable, or easy to expand and upgrade over time. They must be built efficiently, with high-bandwidth connections, to avoid network bottlenecks. Finally, appropriate security measures, such as firewalls, must be implemented to prevent external attacks. Many data centers have sophisticated monitoring tools to catch bottlenecks and hacking attempts so network administrators can address them before problems occur. Updated April 7, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/data_center Copy

Data Management

Data Management Data management is a general term that covers a broad range of data applications. It may refer to basic data management concepts or to specific technologies. Some notable applications include 1) data design, 2) data storage, and 3) data security. 1. Data design, or data architecture, refers to the way data is structured. For example, when creating a file format for an application, the developer must decide how to organize the data in the file. For some applications, it may make sense to store data in a text format, while other programs may benefit from a binary file format. Regardless of what format the developer uses, the data must be organized within the file in a structure that can be recognized by the associated program. 2. Data storage refers to the many different ways of storing data. This includes hard drives, flash memory, optical media, and temporary RAM storage. When selecting an appropriate data storage medium, concepts such as data access and data integrity are important to consider. For example, data that is accessed and modified on a regular basis should be stored on a hard drive or flash media. This is because these types of media provide quick access and allow the data to be moved or changed. Archived data, on the other hand, may be stored on optical media, such as CDs and DVDs, since the data does not need to be changed. Optical discs also maintain data integrity longer than hard drives, which makes them a good choice for archival purposes. 3. Data security involves protecting computer data. Many individuals and businesses store valuable data on computer systems. If you've ever felt like your life is stored on your computer, you understand how important data can be. Therefore, it is wise to take steps to protect the privacy and integrity of your data. Some steps include installing a firewall to prevent unauthorized access to your computer and encrypting personal data that is submitted online or shared with other users. It is also important to backup your data so that you will be able to recover your files in case your primary storage device fails. Updated November 2, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/data_management Copy

Data Mining

Data Mining Data mining is the process of analyzing large amounts of data in order to discover patterns and other information. It is typically performed on databases, which store data in a structured format. By "mining" large amounts of data, hidden information can be discovered and used for other purposes. Data Mining Examples A credit card company might use data mining to learn more about their members' buying habits. By analyzing purchases from cardholders across the United States, the company may discover shopping habits for different demographics, such as age, race, and location. This information could be useful in offering individuals specific promotions. The same data may also reveal shopping patterns in different regions of the country. This information could be valuable to companies looking to advertise or start businesses in specific states. Online services, such as Google and Facebook, mine enormous amounts of data to provide targeted content and advertisements to their users. Google, for example, might analyze search queries to discover popular searches for certain areas and move those to the top of the autocomplete list (the suggestions that appear as you type). By mining user activity data, Facebook might discover popular topics among different age groups and provide targeted ads based on this information. While data mining is commonly used for marketing purposes, it has many other uses as well. For instance, healthcare companies may use data mining to discover links between certain genes and diseases. Weather companies can mine data to discover weather patterns that may help predict future meteorologic events. Traffic management institutions can mine automotive data to forecast future traffic levels and create appropriate plans for highways and streets. Data Mining Requirements Data mining requires two things — lots of data and lots of computing power. The more organized the data, the easier it is to mine it for useful information. Therefore it is important for any organization that wants to engage in data mining to be proactive in selecting what data to log and how to store it. When it comes to mining the data, supercomputers and computing clusters may be used to process petabytes of data. Updated February 11, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/data_mining Copy

Data Science

Data Science Data science is the study of data. It involves developing methods of recording, storing, and analyzing data to effectively extract useful information. The goal of data science is to gain insights and knowledge from any type of data — both structured and unstructured. Data science is related to computer science, but is a separate field. Computer science involves creating programs and algorithms to record and process data, while data science covers any type of data analysis, which may or may not use computers. Data science is more closely related to the mathematics field of Statistics, which includes the collection, organization, analysis, and presentation of data. Because of the large amounts of data modern companies and organizations maintain, data science has become an integral part of IT. For example, a company that has petabytes of user data may use data science to develop effective ways to store, manage, and analyze the data. The company may use the scientific method to run tests and extract results that can provide meaningful insights about their users. Data Science vs Data Mining Data science is often confused with data mining. However, data mining is a subset of data science. It involves analyzing large amounts of data (such as big data) in order to discover patterns and other useful information. Data science covers the entire scope of data collection and processing. Updated August 17, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/data_science Copy

Data Storage

Data Storage Data storage refers to various methods of saving digital data and preserving it for future use, even when the computer is powered down. All computers include at least one form of long-term data storage device, like a hard drive or solid-state flash drive. They can also store data on external storage devices or other computers called "file servers." Computer data is stored digitally as a series of bits — 1s and 0s representing an On or Off state. Eight bits make up a byte, the basic measurement unit of data storage. Data is encoded (using different methods for different data types) into individual objects called files. Each file is identified using a filename and given an extension that identifies the file's type. Files on a disk are organized into folders using a file system. Most operating systems create special folders to organize files by source or type, but you can also manage files as you like by creating your own folders. Computers can store data on a wide variety of storage devices. Solid-state flash memory stores data as a series of electrical charges in memory cells. Hard disk drives store it by magnetizing material on spinning platters. Data can also be stored on optical discs, encoded as a series of pits pressed into the underside of a CD, DVD, or Blu-ray disc. Some computers can write data to special removable magnetic tape cartridges, which are then placed into storage for long-term data archival. You can also store data on devices other than the computer that created it. USB flash drives and external hard drives are good ways to create copies of files to transfer to another computer. Network-attached storage devices can store files in a way that makes them accessible to multiple computers on a network. Many people also use cloud storage services to keep a copy of their files on a server in a remote data center, allowing them to access their data from any Internet-connected computer. Updated January 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/data_storage Copy

Data Transfer Rate

Data Transfer Rate A data transfer rate measures how fast data moves from one location to another. It can refer to the movement of data between disks, between other components like a CPU and RAM, or between two devices connected over a network. The maximum data transfer rate that a connection supports is known as its bandwidth, and the actual observed rate is called its throughput. How a data transfer rate is measured often depends on the context. Telecommunication data transfer rates are measured in bits per second (bps). For example, a DSL internet connection may transfer data at 20 Megabits per second (Mbps), while a high-speed ethernet connection can go up to 10 Gigabits per second (Gbps). A file transfer in Windows measured using megabytes per second (MB/s) File transfer rates are instead often measured using bytes per second (Bps). For example, when you copy a file from a network drive to your computer, the Windows file transfer dialog box will show a number like 5 MB/s. Since a single byte is eight bits, you can convert between bits per second and bytes per second by multiplying by 8 (to go from bytes to bits) or dividing by 8 (for bits to bytes). Data transfer rates often vary during a data transfer. Many factors, like congestion, interference, or distance, can affect a data transfer rate. For example, if you are not the only person copying files from another computer over the network, the data transfer rate you see will be lower than if you had the entire network to yourself. Updated July 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/data_transfer_rate Copy
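The bits-versus-bytes distinction is a common source of confusion, so a quick conversion sketch may help (the connection speed is just an example):

def mbps_to_megabytes_per_sec(megabits_per_second: float) -> float:
    """Convert a network rate in megabits per second to megabytes per second."""
    return megabits_per_second / 8   # 8 bits per byte

connection_speed = 20                                 # the 20 Mbps DSL connection mentioned above
print(mbps_to_megabytes_per_sec(connection_speed))    # 2.5 MB/s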

Data Type

Data Type A data type is a type of data. Of course, that is a rather circular definition, and not a very helpful one. Therefore, a better definition of a data type is a data storage format that can contain a specific type or range of values. When computer programs store data in variables, each variable must be assigned a specific data type. Some common data types include integers, floating point numbers, characters, strings, and arrays. They may also be more specific types, such as dates, timestamps, boolean values, and varchar (variable character) formats. Some programming languages require the programmer to define the data type of a variable before assigning it a value. Other languages can automatically assign a variable's data type when the initial data is entered into the variable. For example, if the variable "var1" is created with the value "1.25," the variable would be created as a floating point data type. If the variable is set to "Hello world!," the variable would be assigned a string data type. Most programming languages allow each variable to store a single data type. Therefore, if the variable's data type has already been set to an integer, assigning string data to the variable may cause the data to be converted to an integer format. Data types are also used by database applications. The fields within a database often require a specific type of data to be input. For example, a company's record for an employee may use a string data type for the employee's first and last name. The employee's date of hire would be stored in a date format, while his or her salary may be stored as an integer. By keeping the data types uniform across multiple records, database applications can easily search, sort, and compare fields in different records. Updated October 17, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/datatype Copy
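Languages that assign data types automatically make this behavior easy to observe. In Python, for example, the type of a variable simply follows the value assigned to it:

var1 = 1.25
print(type(var1))     # <class 'float'> - a floating point data type

var1 = "Hello world!"
print(type(var1))     # <class 'str'> - a string data type

count = int("42")     # explicit conversion from a string to an integer
print(count + 1)      # 43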

Database

Database A database is a structured, organized collection of data stored on a computer system. Databases typically store information in multiple linked tables that keep relevant data for each record in dedicated fields. Database management system (DBMS) software maintains the relationships between tables, adds and updates records, and displays data in response to queries. Most modern databases are relational databases, which link multiple tables together by creating relationships between primary key fields. For example, a business may keep a database that tracks customer information (like their name and email address) in one table, orders in another, and products for sale in a third table. Each record in these tables has a primary key field containing a unique ID that records in other tables can refer to; every order links to the CustomerID for the customer that placed the order and to the ProductID for each product purchased. Early databases were known as "flat file" databases. These databases kept everything in a single table, like in a spreadsheet. Looking up information in a database is called a query. Most databases use the structured query language (SQL) standard for queries, which lets you access and update records across multiple tables in a database. For example, a query to a database could display all customers in a particular ZIP code, all orders placed within the past week, or which customers have purchased a specified product more than once. An SQL query displaying records that meet the search criteria Databases are used for countless purposes in business, education, healthcare, and government. E-commerce, social media, news sites, and other dynamic websites use databases as a backend to store their content, generating pages containing content from their databases on demand. Finally, while early databases could only handle plain text, modern databases can store other data types like pictures, audio, and video. Updated April 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/database Copy
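The table relationships described above can be sketched with Python's built-in sqlite3 module. The table and field names below are illustrative, not taken from any particular system:

import sqlite3

db = sqlite3.connect(":memory:")   # a temporary database kept in RAM
db.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, Email TEXT);
    CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, OrderDate TEXT,
                         FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID));
    INSERT INTO Customers VALUES (1, 'Ada Lovelace', 'ada@example.com');
    INSERT INTO Orders VALUES (100, 1, '2023-04-18');
""")

# A query that joins the two tables through the shared CustomerID field.
for row in db.execute("""SELECT Customers.Name, Orders.OrderID, Orders.OrderDate
                         FROM Orders JOIN Customers
                         ON Orders.CustomerID = Customers.CustomerID"""):
    print(row)   # ('Ada Lovelace', 100, '2023-04-18')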

Datagram

Datagram Datagram is a combination of the words data and telegram. Therefore, it is a message containing data that is sent from one location to another. A datagram is similar to a packet, but does not require confirmation that it has been received. This makes datagrams ideal for streaming services, where the constant flow of data is more important than 100% accuracy. Datagrams are also called "IP datagrams" since they are used by the Internet protocol (IP). This protocol defines how information is sent between systems over the Internet. For example, each device connected to the Internet must have an IP address, which serves as a unique identifier. Whenever data is transmitted via the Internet protocol, it is broken up into packets or datagrams, which each contain a header plus the actual data transmitted. A datagram header defines the source and destination of the data as well as other information, such as the total length (or size) of the datagram, time to live (TTL), and the specific protocol used to transfer the data. Generally, datagrams are sent via the UDP protocol, which is used for media streaming and other services that do not require confirmation that the data has been received. Packets, on the other hand, are typically sent via TCP, which guarantees all the data sent has been received. Updated September 22, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/datagram Copy
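Sending a datagram requires no connection setup and no acknowledgment, which the short sketch below demonstrates using Python's standard socket module (the address and port are placeholders):

import socket

# SOCK_DGRAM selects UDP, the protocol normally used for datagrams.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

message = b"status update"
# sendto() fires off a single datagram; there is no handshake and no delivery confirmation.
sock.sendto(message, ("127.0.0.1", 9999))
sock.close()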

DAW

DAW Stands for "Digital Audio Workstation." A DAW is a digital system designed for recording and editing digital audio. It may refer to audio hardware, audio software, or both. Early DAWs, such as those developed from the 1970s through the 1990s, were hardware units that included a mixing console, data storage device, and an analog-to-digital converter (ADC). They could be used to record, edit, and play back digital audio. These devices, called "integrated DAWs," are still used today, but they have largely been replaced by computer systems with digital audio software. Today, a computer system is the central user interface of most DAWs. Most professional recording studios include one or more large mixing boards connected to a desktop computer. Home studios and portable studios may simply include a laptop with audio software and a recording interface. Since computers have replaced most integrated DAWs, audio editing and post-production is now performed primarily with software rather than hardware. Several audio production programs, commonly called DAW software, are available for both Macintosh and Windows systems. Some common cross-platform titles include Avid Pro Tools, Steinberg Cubase, and Ableton Live. Other platform-specific DAW programs include Cakewalk SONAR for Windows and MOTU Digital Performer for Mac OS X. Updated July 16, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/daw Copy

DBMS

DBMS Stands for "Database Management System." A DBMS is a software program that creates and runs a database. It creates a framework that can store, organize, retrieve, and manipulate the contents of a database in a structured manner. It also provides an interface for users and applications to access a database's contents using a standardized query language like SQL. DBMS software also provides tools for database administrators to manage their databases. First, it provides a way for administrators to design a database by creating schemas, defining tables, and creating relationships. It helps administrators configure the database's user accounts, including permissions levels for each one. Administrators can specify which users can access data and which can add or modify database records; they may even choose to keep some data private and only accessible by specific accounts. A DBMS also helps administrators automate database backups and then restore from those backups when necessary. It can even monitor database performance, like how long each query takes. MySQL Workbench is a GUI frontend for the MySQL DBMS that helps administrators manage a database Many types of DBMS are available, with some popular options including MySQL, Microsoft Access, Oracle Database, and MongoDB. Even though most databases use one of several standard query languages, with SQL being the most common, each DBMS has a unique feature set and implements those features using custom query commands. Since databases often need to communicate with each other, most database management systems include an Open Database Connectivity (ODBC) driver that converts proprietary syntax into a dialect that other databases can understand. Updated August 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dbms Copy

DCIM

DCIM Short for "Digital Camera Images." DCIM is the standard name of the root folder digital cameras use to store digital photos. Therefore, when you connect a digital camera to your computer, the disk that appears on your desktop (Mac) or in "My Computer" (Windows) will most likely have a folder named "DCIM" inside it. The DCIM directory is part of the DCF (Design Rule for Camera File System) developed by JEITA (Japan Electronics and Information Technology Industries Association). This file system was originally established in December 1998 and has since become the standard file system used by all digital cameras. Therefore, whenever you connect a digital camera or smartphone to your PC, your computer will look for the DCIM folder. Similarly, if you insert a memory card into a card reader connected to your computer, it will scan the memory card for a folder named "DCIM." The DCIM folder provides a standard way for computers to recognize and import images from connected devices. For example, if your computer finds a DCIM folder on a memory card, it may automatically open your default image organization program or image importing utility. This allows all types of computers to find and import images from cameras made by different manufacturers. While most computers include software that can automatically import all images from the DCIM folder, you can also manually browse through the images stored within the DCIM directory. However, you may have to browse through multiple subdirectories to find all the images. Additionally, your camera may save thumbnail images along with the full size images. By using a program or utility that imports digital photos, you can make sure to import all the full size images from the DCIM folder to your computer correctly. Updated December 7, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dcim Copy
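Because the folder name is standardized, it is also straightforward to find photos on a connected camera or memory card programmatically. A small Python sketch (the mount path is hypothetical and varies by operating system):

from pathlib import Path

card = Path("/Volumes/CAMERA/DCIM")   # hypothetical mount point of a camera's memory card

# Walk every subdirectory of DCIM and collect the JPEG images.
photos = sorted(p for p in card.rglob("*") if p.suffix.lower() in (".jpg", ".jpeg"))
for photo in photos:
    print(photo.name, photo.stat().st_size, "bytes")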

DDL

DDL Stands for "Data Definition Language." A DDL is a language used to define and modify data structures. For example, DDL commands can be used to add, remove, or modify tables within a database. DDLs used in database applications are considered a subset of SQL, the Structured Query Language. However, a DDL may also define other types of data, such as XML. A Data Definition Language has a pre-defined syntax for describing data. For example, to build a new table using SQL syntax, the CREATE command is used, followed by parameters for the table name and column definitions. The DDL can also define the name of each column and the associated data type. Once a table is created, it can be modified using the ALTER command. If the table is no longer needed, the DROP command can be used to delete the table. Since DDL is a subset of SQL, it does not include all the possible SQL commands. For example, commands such as SELECT and INSERT are considered part of the Data Manipulation Language (DML), while access commands such as CONNECT and EXECUTE are part of the Data Control Language (DCL). The DDL, DML, and DCL languages include most of the commands supported by SQL. Updated February 12, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ddl Copy
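The CREATE, ALTER, and DROP commands mentioned above can be tried with any SQL database. A minimal sketch using Python's built-in sqlite3 module (the table and column names are made up for the example):

import sqlite3

db = sqlite3.connect(":memory:")

# CREATE defines a new table, naming each column and its data type.
db.execute("CREATE TABLE Employees (EmployeeID INTEGER PRIMARY KEY, Name TEXT)")

# ALTER modifies the existing structure, here by adding a column.
db.execute("ALTER TABLE Employees ADD COLUMN HireDate TEXT")

# DROP deletes the table and its data when it is no longer needed.
db.execute("DROP TABLE Employees")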

DDR

DDR Stands for "Double Data Rate." DDR is a version of SDRAM, a type of computer memory. Unlike SDR (single data rate) SDRAM, which can perform a read or write action once per clock cycle, DDR can perform two—one on the rising edge of the electrical signal and again on the falling edge. DDR also operates at a lower voltage than SDR (2.6v compared to 3.3v), making it more power-efficient. DDR RAM is available in both DIMM modules for desktop computers, and SO-DIMM modules for laptops. DDR DIMM modules have more pins than SDR DIMM modules, as well as a unique notch position, preventing a DDR DIMM from being inserted into a SDR DIMM slot and vice versa. A Corsair PC-3200 DDR DIMM module DDR RAM modules are graded based on the amount of data they can transfer per second, which is based on its clock speed. For example, a PC-1600 module operates at a clock speed of 100 MHz. It transfers 64 bits, or 8 bytes, of data twice per clock cycle for a total transfer speed of 1,600 MB/s (100 MHz x 2 transfers per cycle x 8 bytes). A faster PC-3200 module instead operates at a clock speed of 200 MHz, for a total transfer speed of 3,200 MB/s (200 MHz x 2 transfers per cycle x 8 bytes). NOTE: DDR SDRAM was introduced in 2000, and succeeded by DDR2 SDRAM in 2003. DDR is also retroactively known as DDR1 when being compared to later generations. Updated September 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ddr Copy
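The module ratings above come from a simple calculation, shown here as a short Python sketch:

def ddr_transfer_rate(clock_mhz: int, bus_width_bytes: int = 8) -> int:
    """Peak transfer rate in MB/s: clock speed x 2 transfers per cycle x bytes per transfer."""
    return clock_mhz * 2 * bus_width_bytes

print(ddr_transfer_rate(100))   # 1600 MB/s - a PC-1600 module at 100 MHz
print(ddr_transfer_rate(200))   # 3200 MB/s - a PC-3200 module at 200 MHz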

DDR2

DDR2 Stands for "Double Data Rate 2." DDR2 RAM is an improved version of DDR memory that is faster and more efficient. Like standard DDR memory, DDR2 memory can send data on both the rising and falling edges of the processor's clock cycles. This nearly doubles the amount of work the RAM can do in a given amount of time. DDR and DDR2 are also both types of SDRAM, which allows them to run faster than conventional memory. While DDR and DDR2 have many similarities, DDR2 RAM uses a different design than DDR memory. The improved design allows DDR2 RAM to run faster than standard DDR memory. The modified design also gives the RAM more bandwidth, which means more data can be passed through the RAM chip at one time. This increases the efficiency of the memory. Since DDR2 runs more efficiently than standard DDR memory, it actually uses less power than DDR memory, even though it runs faster. The only downside of DDR2 memory is that it is not compatible with standard DDR slots. So make sure your computer supports DDR2 RAM before upgrading your memory. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ddr2 Copy

DDR3

DDR3 Stands for "Double Data Rate Type 3." DDR3 is a type of SDRAM that is used for system memory. It is available in both DIMM and SO-DIMM form factors. DDR3 RAM is similar to DDR2 RAM, but uses roughly 30% less power and can transfer data twice as fast. While DDR2 memory can transfer data at up to 3200 MBps (megabytes per second), DDR3 memory supports maximum data transfer rates of 6400 MBps. This means computers with DDR3 memory can transfer data to and from the CPU much faster than systems with DDR2 RAM. The faster memory speed prevents bottlenecks, especially when processing large amounts of data. Therefore, if two computers have the same processor clock speed, but different types of memory, the computer with DDR3 memory may perform faster than the computer with DDR2 memory. DDR3 memory modules look similar to DDR and DDR2 chips, but the gap that separates the two sets of pins on the bottom of each module is in a different location. This prevents the RAM chip from being installed in a slot that does not support DDR3 RAM. Therefore, when upgrading your computer's memory, make sure you get the appropriate type of memory for your computer. Updated April 6, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ddr3 Copy

DDR4

DDR4 Stands for "Double Data Rate 4." DDR4 is the fourth generation of DDR RAM, a type of memory commonly used in desktop and laptop computers. It was introduced in 2014, though it did not gain widespread adoption until 2016. DDR4 is designed to replace DDR3, the previous DDR standard. Advantages include faster data transfer rates and larger capacities, thanks to greater memory density and more memory banks (16 rather than 8). DDR4 also operates at a lower voltage (1.2V compared to 1.5V), so it is more power-efficient. Below are some notable DDR4 specifications: 64 GB maximum capacity per memory module (common capacities are 16 GB and 32 GB) 16 internal memory banks 1600 to 4400 MHz clock speed 17 to 35.2 GB/s data transfer rate 1.2 volts of electrical power 288 pins on a regular DIMM, 260 pins on a SO-DIMM DDR4 memory modules come in two primary form factors — DIMMs and SO-DIMMs. Desktop towers often contain DIMMs, while laptops and all-in-one desktop computers typically use SO-DIMMs. DDR4 DIMMs are the first to have a curved bottom edge, which makes it easier to insert them into and remove them from RAM slots on a motherboard. This and the unique position of the notch between the pins make it impossible to insert a DDR4 chip into an incompatible slot. Faster speeds and increased memory bandwidth allow DDR4 SDRAM to keep up with modern processors, including multi-core CPUs. This prevents the memory from being a bottleneck as processing speeds and bus speeds increase. NOTE: SDRAM must be matched with the specific requirements of a computer. When upgrading your memory, make sure you select the type (DDR2, DDR3, DDR4, etc) and speed (1600, 2400, 3200, etc) that is compatible with your computer. Updated January 17, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ddr4 Copy

DDR5

DDR5 Stands for "Double Data Rate 5." DDR5 is the fifth generation of DDR RAM, a type of memory used by desktop, laptop, and server computers. It replaces the previous standard, DDR4, and offers faster data transfer rates and better energy efficiency. It was first available in 2020. DDR5 memory modules are available in several form factors. Desktop computers use DIMM modules, while laptop computers use smaller SO-DIMMs instead. DDR5 DIMMs are similar in size and shape to DDR4 DIMMs, using 288 contact pins and featuring the same curved bottom edge. However, DDR5 DIMMs feature a relocated keying notch to prevent you from inserting the RAM module into an incompatible slot. DDR5 SO-DIMMs use a different design than DDR4, featuring 262 pins instead of 260. A Kingston high-performance DDR5 DIMM, with a heatsink covering the individual memory chips Some notable DDR5 specifications include: A maximum capacity of 512 GB per DIMM, although most modules are only 16-32 GB each. 32 internal memory banks, double the number from DDR4. Clock speeds between 1,600 and 3,200 MHz. Data transfer rates between 3,200 and 8,400 Mbps. A reduced power draw of 1.1 volts of electrical power, compared to 1.2 volts for DDR4. Two 32-bit wide data channels instead of a single 64-bit wide data channel. Using two smaller data channels allows DDR5 to transfer data more efficiently and with less latency. On-Die Error-Correcting Code (ODECC) on every module to detect and correct errors before data is sent to the CPU. NOTE: A computer's motherboard and chipset determine which type of SDRAM it supports. Each generation of DDR SDRAM uses a different pin and notch layout, preventing you from inserting the wrong type of memory. Make sure you know what type (DDR3, DDR4, DDR5) and speed (DDR5-4800, DDR5-6400, DDR5-7200) of memory you need before upgrading your RAM. Updated October 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ddr5 Copy

Deadlock

Deadlock A deadlock is a condition where a program cannot access a resource it needs to continue. When an active application hits a deadlock, it may "hang" or become unresponsive. Resources, such as saved or cached data, may be locked when accessed by a specific process within a program. Locking the data prevents other processes from overwriting the data prematurely. If a process or query needs to access locked data, but the process locking the data won't let it go, a deadlock may occur. For example, the following situation will cause a deadlock between two processes: Process 1 requests resource B from process 2. Resource B is locked while process 2 is running. Process 2 requires resource A from process 1 to finish running. Resource A is locked while process 1 is running. The result is that process 1 and process 2 are waiting for each other to finish. Since neither process can continue until the other one completes, a deadlock is created. Avoiding Deadlocks Developers can prevent deadlocks by avoiding locking conditions in their programming logic. For example, instead of having two processes rely on each other, the source code can be written so that each thread finishes before another thread needs its resources. By ensuring data is accessible when needed, programmers can protect their applications from hanging or crashing. NOTE: Deadlocks may also occur when two or more queries are run on a database. Transactional databases lock active records, preventing other queries from accessing them. If a process cannot access a locked record, a database deadlock may occur. Updated January 18, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/deadlock Copy
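The two-process example above can be sketched with Python threads and two locks. This is a deliberately broken illustration (the names are made up), and running it will usually hang, since each thread ends up waiting for the lock the other one holds:

import threading, time

resource_a = threading.Lock()
resource_b = threading.Lock()

def process_1():
    with resource_a:              # process 1 locks resource A
        time.sleep(0.1)           # give the other thread time to lock B
        with resource_b:          # ...then waits for resource B
            print("process 1 finished")

def process_2():
    with resource_b:              # process 2 locks resource B
        time.sleep(0.1)           # give the other thread time to lock A
        with resource_a:          # ...then waits for resource A
            print("process 2 finished")

threading.Thread(target=process_1).start()
threading.Thread(target=process_2).start()
# Each thread now holds one lock while waiting for the other: a deadlock.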

Debug

Debug Debugging is the task of finding and fixing bugs (or errors) in a software program. Bugs can range from small inconveniences (like ignoring user input in certain circumstances) to significant problems that can cause memory leaks or crashes. Several methods are available for software developers to debug a program, including using a debugger or analyzing crash reports. Software developers debug their software before releasing it to catch as many errors as possible before the application is available to the public. It's unlikely that a developer can find every bug the first time, so most developers have a process for getting bug feedback from users. A developer may release an early version of their software, known as a beta version, to a limited set of users that help identify errors. They can ask for bug reports directly from end users or include specialized code in their software that automatically sends the developer crash reports. After another round of debugging, the developer issues a patch. Integrated Development Environments (IDEs) include debuggers that help developers debug an application's source code. These debuggers run an application's source code in a few special ways — a debugger can run a program step-by-step, pausing between steps to make it easier to identify where errors appear; a debugger can modify a program's source code as it is running in order to make changes on the fly; or it can record a program's activity for later playback and analysis. Updated October 31, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/debug Copy

Debugger

Debugger A debugger is a software tool developers use to identify, analyze, and fix errors (or "bugs") in their code. It allows programmers to inspect a program's execution in real-time, making it easier to diagnose issues that might not be immediately apparent. Debuggers help track variable values, memory usage, and the execution flow, often highlighting the exact lines of code where errors occur. Most debuggers allow developers to set breakpoints, which pause execution at specific points in the program. The programmer can examine the program's state at each breakpoint and determine if everything looks correct. When isolating a specific issue, a programmer may use step-through execution, which runs the code line by line. Advanced debugging tools also provide insights into memory allocation, thread activity, and performance bottlenecks. Debugging an application with Apple Xcode Apple's Xcode (above) provides an integrated development environment (IDE), which includes a powerful debugger. Developers can set breakpoints, view variable values in real time, and use the console to inspect and modify an application's state. For example, if a macOS app crashes due to a segmentation fault, Xcode's debugger can display a stack trace to help the programmer pinpoint the issue. In some cases, it can show the exact function and line number where the error occurred. A capable debugger is an essential tool for modern software development. Even the best programmers rely on debugging tools to refine their code and ensure their applications run as expected. Updated January 31, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/debugger Copy
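Xcode is only one example; most languages ship with a debugger of their own. As a rough Python equivalent, the built-in breakpoint() call launches the pdb debugger, pausing execution so you can inspect variables and step through the code line by line:

def average(values):
    total = sum(values)
    breakpoint()          # execution pauses here; inspect 'total' and 'values' in pdb
    return total / len(values)

print(average([2, 4, 6]))
# At the (Pdb) prompt: 'p total' prints a variable, 'n' steps to the next line,
# and 'c' continues running the program.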

Decryption

Decryption Decryption is the process of converting encrypted data into recognizable information. It is the opposite of encryption, which takes readable data and makes it unrecognizable. Files and data transfers may be encrypted to prevent unauthorized access. If someone tries to view an encrypted document, it will appear as a random series of characters. If someone tries to "snoop" on an encrypted network connection, the data will not make any sense. So how is it possible to view encrypted data? The answer is decryption. How Decryption Works There are several ways to encrypt files, but most methods involve one or more "keys." When encrypting a file, the key may be a single password. Network encryption often involves a public key and a private key shared between each end of the data transfer. Decryption works by applying the opposite conversion algorithm used to encrypt the data. The same key is required to return the encrypted data to its original state. For example, if the password "ABC123" is used to encrypt a file, the same password is needed to decrypt the file. Secure network transfers, including Internet connections, handle encryption and decryption in the background. Protocols such as HTTPS and Secure SMTP encrypt and decrypt data on the fly. These protocols automatically generate a secure key for each encrypted network transfer and do not require a password. NOTE: Encrypting data is a smart way to protect private information from prying eyes. But make sure to use a password you will remember when encrypting files. If you cannot remember your password, the data may be unrecoverable. Updated May 8, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/decryption Copy
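As a minimal sketch of symmetric encryption and decryption (not tied to any particular protocol or file format), the third-party Python cryptography package can encrypt data with a key and decrypt it with the same key:

from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # the shared secret key
cipher = Fernet(key)

encrypted = cipher.encrypt(b"Meet at noon")   # unreadable ciphertext
print(encrypted)

decrypted = cipher.decrypt(encrypted)         # the same key restores the original data
print(decrypted)                              # b'Meet at noon'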

Default

Default Default is an adjective that describes a standard setting or configuration. While it is not specific to computers, it is commonly used in IT terminology. In computing, "default" may describe several things, including hardware, software, and network configurations. A default hardware configuration (or "default config") is a prebuilt system with standard specifications. This includes standard components such as the CPU, RAM, storage device, and video card. Retail stores typically carry default hardware configurations, while online stores are more likely to offer customization options. In software, "default" describes preset settings. For example, when you install a program for the first time, it will load the default preferences. If you want to change these settings, you can open the "Preferences" window and modify the default options to suit your needs. Your web browser, for instance, might open new pages in tabs by default. If you prefer windows instead of tabs, you can change the default selection in the Preferences window. Some applications offer a "Restore Default Settings" command that allows you to revert to the original settings at any time. In networking, "default" is often used in reference to protocols. For example, a web server may serve webpages over HTTP by default, but it may also offer a secure HTTPS option. Common protocols, such as FTP and SMTP are typically assigned default port numbers, but they can be altered for security reasons. NOTE: While default is most often used as an adjective, it can also be used as a verb. For example, when a certain option is not available (such as a specific font), a program may "default" to the preset setting. Updated July 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/default Copy

Default Program

Default Program A default program is an application that opens a file when you double-click it. For example, if you double-click a .TXT file in Windows and it automatically opens in Notepad, then Notepad is the default program for files with a ".txt" extension. If the file opens in Microsoft Word, then Microsoft Word is the default program. Default programs are necessary since many file types can be opened by more than one program. For example, your computer may have over a dozen applications that can open .JPG files. Therefore, the operating system needs to know which program to open by default when you double-click a JPEG image file. Both Windows and Macintosh computers store a list of default programs for each file extension. These relationships between programs and file extensions are also called "file associations." Both the Windows and Macintosh operating systems allow you to change file associations if you don't like the default program that is associated with a certain file type. For example, if you prefer to play MP3 files in iTunes rather than Windows Media Player, you can change the ".mp3" file association to iTunes. This will set iTunes as the default program for all .MP3 files. Windows 7 has a built-in utility for configuring file associations called "Default Programs." This tool allows you to assign specific programs to one or more file extensions using a simple graphical interface. It also displays what file extensions are associated with each installed application. For more information on using the Windows 7 Default Programs tool, view the FileInfo.com Default Programs Help Article. While Mac OS X does not include a Default Programs tool, you can simply right-click a file and choose "Open With…" to select a different program to open it. If you want to change the default program for a specific file, select the file and choose File → Get Info. Then select a different program in the "Open with:" section of the window. If you want to change the default program for all files with the same extension, press the "Change All…" button. Updated February 4, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/default_program Copy

Defragment

Defragment Defragmenting is a disk maintenance process that reorganizes the data on a hard disk to improve its performance. It identifies which files include fragmented data spread around the disk, then moves data around to improve read and write speed. Defragmenting hard disk drives is a common maintenance task, but it is unnecessary on SSDs and other flash drives. When you save a new file to a hard disk, it breaks that data into small chunks, called sectors, on a disk platter. If the hard disk is relatively empty, it can easily allocate enough contiguous sectors to store the whole file in one spot. If you modify that file later in a way that takes up more space, and there aren't enough open sectors adjacent to it on the disk platter, the drive saves that data wherever there is room. Over time, files are broken up, or "fragmented," and scattered around all over the disk. Since the disk's read and write heads must move around more to read or write a file, this slows down disk access speeds and increases wear and tear on the disk drive. Defragmenting a disk moves data around so that each file is saved to contiguous sectors A defragmentation utility cleans up a drive in several steps. First, it scans the entire disk to create a map of sectors and tracks which file each sector is associated with. It then rearranges data to reunite fragmented files back into contiguous blocks. If a disk is very fragmented, defragmenting can recover disk space by consolidating data from partially-used sectors. Defragmentation can take a lot of time on large, heavily fragmented disks, so defragmenting your hard drives regularly helps keep each pass short. Windows and Linux both come with built-in defragmentation utilities that often run automatically. While macOS does not have its own defragmentation utility, it does include other features that minimize file fragmentation. NOTE: While defragmenting a hard disk drive is considered regular maintenance, defragmenting a solid-state drive is unnecessary and often harmful. SSDs read and write data at the same speed regardless of whether a file is fragmented or not, and attempting to defragment an SSD may instead reduce its lifespan through excessive read and write cycles. Updated September 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/defragment Copy
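As a highly simplified illustration of the idea (real defragmenters work with disk sectors and file system metadata, not Python lists), the sketch below rebuilds a toy "disk" so that each file's blocks end up next to each other:

# Each file's data blocks, listed in the scattered order they occupy on the "disk"
fragmented_disk = ["A1", "B1", "A2", None, "B2", "A3"]

def defragment(disk):
    # Group each file's blocks together, then lay them out contiguously
    files = {}
    for block in disk:
        if block is not None:
            files.setdefault(block[0], []).append(block)
    contiguous = [block for name in sorted(files) for block in files[name]]
    free_space = [None] * (len(disk) - len(contiguous))
    return contiguous + free_space

print(defragment(fragmented_disk))
# ['A1', 'A2', 'A3', 'B1', 'B2', None]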

Degauss

Degauss Degaussing is the process of reducing a magnetic field. It can be used to reset the magnetic field of a CRT monitor or to destroy the data on a magnetic storage device. A CRT monitor displays images by firing electrons through a cathode ray tube (CRT) onto the screen. The electron beams are focused by a plate near the screen called a shadow mask. If this plate becomes unevenly magnetized by surrounding objects or simply the earth's magnetic field, it can cause discoloration on the screen. Therefore, CRT monitors often include a "Degauss" command that resets the magnetic field when run. Since it is impossible to eliminate the magnetic charge inside the monitor, the degaussing process realigns or randomizes the magnetic field, which provides consistent colors across the screen. Degaussing may also be used to destroy the data on a magnetic storage device, such as a hard drive or tape drive. A hard drive degausser, for example, includes a chamber where you can insert a hard drive. When you run the degausser, it uses a process called capacitive discharge to decrease the drive's magnetic field, making any data on the drive unreadable. The cycle time, which is the time it takes for the data to be erased, varies between degaussers, but is usually between ten seconds and one minute. Companies and government organizations may use degaussers to eliminate the data on storage devices before reprovisioning or discarding them. Updated August 15, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/degauss Copy

Delete

Delete To delete something is to remove or erase it. You can delete text from a document, data from a spreadsheet, or even entire files and folders from a disk. Computers provide several ways for you to delete content quickly so that you can keep working without much interruption. Computer keyboards have either one or two keys that issue a delete command. All keyboards have a Backspace key (labeled Delete on Mac keyboards) at the right end of the numbers row, and some have a smaller secondary Delete key too. When editing text, pressing the Backspace key deletes one text character behind (to the left of) the text cursor. Pressing the smaller Delete key deletes text to the right of the text cursor. If you have a block of text selected, pressing either key deletes the entire selection. The location of the Backspace and Delete keys on a Microsoft keyboard You can delete files and folders from a disk in several ways. You can click and drag a file or folder's icon from the file manager into the Recycle Bin (on Windows) or Trash (on macOS). You can select it, then choose the Delete / Move to Trash option from a menu, toolbar, or right-click menu. Finally, you can select a file or folder and press the Delete key (on Windows) or Command + Delete (on macOS). Moving a file or folder to the Recycle Bin or Trash does not delete it right away. The operating system keeps files there temporarily, allowing you to restore them if you need them again. They are not removed from the file system until you empty the Recycle Bin or Trash, and even then, traces of the files will remain on the disk until the disk overwrites those sectors with new files. Data recovery utilities can often undelete files if you act soon enough. Updated November 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/delete Copy

Delimiter

Delimiter A delimiter is a text character (or a sequence of characters) that separates elements in a text string or data stream. Some delimiters separate an object into multiple pieces of data, while other delimiters establish boundaries around data to provide structure. Delimiters can provide a reliable and predictable way to split text strings for computer programming, text parsing, and data analysis. Delimiters are utilized in many different contexts. Programming languages use several delimiter characters to structure and format code, like parentheses and brackets to enclose code blocks and semicolons to mark the end of a statement. CSV files use commas, tabs, or pipes to separate columns in a database record, and new line characters to separate rows from each other. Command line interfaces let you use delimiters to separate commands from modifying parameters. URL addresses include several kinds of delimiters, including slashes that indicate documents nested within folders, colons that separate a domain name from a port number, and question marks that separate a document's location from additional parameters provided to the web server. A CSV file using commas as a delimiter between fields If you can choose a delimiter for text parsing or data analysis, it's best to choose one that does not appear in the text itself. If that cannot be avoided, every instance of that character that should not act as a delimiter must be escaped, either by preceding it with an escape character or by enclosing the field in quotation marks. The following example uses a comma (,) as a delimiter between data fields in a CSV file; the comma inside the address does not separate fields, so that field is enclosed in quotation marks, as seen in the record's address field below.
Name,Age,Address
John Doe,30,"4500 2nd Ave, Unit 12"
Updated November 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/delimiter Copy
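Many languages include parsing tools that understand these conventions. For example, Python's built-in csv module splits each line on the comma delimiters while keeping the quoted address together as a single field:

import csv, io

data = 'Name,Age,Address\nJohn Doe,30,"4500 2nd Ave, Unit 12"\n'

for row in csv.reader(io.StringIO(data)):   # splits each line on comma delimiters
    print(row)

# Output (the quoted address stays together as one field):
# ['Name', 'Age', 'Address']
# ['John Doe', '30', '4500 2nd Ave, Unit 12']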

Denary

Denary Denary, also known as "decimal" or "base 10," is the standard number system used around the world. It uses ten digits (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) to represent all numbers. Denary is often contrasted with binary, the standard number system used by computers and other electronic devices. The first two letters in denary ("de") are an abbreviated version of "dec," which is a Latin prefix meaning "ten." The Latin prefix "bi" means two. Therefore, the denary system contains ten digits, while the binary system only contains two (0 and 1). Both systems can be used to represent any integer. The table below shows how numbers are displayed in both denary and binary.
Denary    Binary
1         1
5         101
10        1010
50        110010
100       1100100
1,234     10011010010
To calculate the value of a number in either denary or binary, you can multiply each digit by the appropriate multiplier and add the results together to get the total. In denary, each digit from right to left is multiplied by 10 raised to the corresponding power, starting with 0. For example, 1,234 in denary can be calculated as follows: 4 x 1 (10^0) + 3 x 10 (10^1) + 2 x 100 (10^2) + 1 x 1,000 (10^3) = 4 + 30 + 200 + 1,000 = 1,234. In binary, each digit is multiplied by 2 raised to the corresponding power. For instance, the binary number 1010 can be calculated in denary as: 0 x 1 (2^0) + 1 x 2 (2^1) + 0 x 4 (2^2) + 1 x 8 (2^3) = 0 + 2 + 0 + 8 = 10. Another less common number system is hexadecimal, which uses the same ten digits as denary plus A, B, C, D, E, and F. Understanding hexadecimal numbers is important for web developers and computer science majors, but is not necessary for the average user. Knowledge of the denary number system alone is sufficient for most people. Updated January 27, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/denary Copy
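The same place-value arithmetic can be checked in a few lines of Python, which can convert between denary and binary directly:

number = 1234

binary = bin(number)[2:]        # strip the "0b" prefix -> '10011010010'
print(binary)

# Rebuild the denary value by multiplying each binary digit by its power of 2
total = sum(int(digit) * 2 ** power
            for power, digit in enumerate(reversed(binary)))
print(total)                    # 1234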

Denial of Service

Denial of Service A denial of service attack is an effort to make one or more computer systems unavailable. It is typically targeted at web servers, but it can also be used on mail servers, name servers, and any other type of computer system. Denial of service (DoS) attacks may be initiated from a single machine, but they typically use many computers to carry out an attack. Since most servers have firewalls and other security software installed, it is easy to lock out individual systems. Therefore, distributed denial of service (DDoS) attacks are often used to coordinate multiple systems in a simultaneous attack. A distributed denial of service attack tells all coordinated systems to send a stream of requests to a specific server at the same time. These requests may be a simple ping or a more complex series of packets. If the server cannot respond to the large number of simultaneous requests, incoming requests will eventually become queued. This backlog of requests may result in a slow response time or no response at all. When the server is unable to respond to legitimate requests, the denial of service attack has succeeded. DoS attacks are a common method hackers use to attack websites. Since flooding a server with requests does not require any authentication, even a highly secured server is vulnerable. However, a single system is typically not capable of carrying out a successful DoS attack. Therefore, a hacker may create a botnet to control multiple computers at once. A botnet can be used to carry out a DDoS attack, which is far more effective than an attack from a single computer. Denial of service attacks can be problematic, especially when they cause large websites to be unavailable during high-traffic times. Fortunately, security software has been developed to detect DoS attacks and limit their effectiveness. While many well-known websites, like Google, Twitter, and WordPress, have all been targets of denial of service attacks in the past, they have been able to update their security systems and prevent further service interruptions. Updated August 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/denial_of_service Copy

Deployment

Deployment Software deployment is the process of making software usable on one or more systems. Both single and multi-system deployment involve four primary steps: Download Install Activate Update 1. Download While some software is still distributed on flash drives or optical media (CDs and DVDs), most software today is downloaded. Therefore, deployment often begins with downloading a software installer. 2. Install The installation process is the primary deployment step, in which the application and corresponding files are installed on one or more systems. Installation options include where to install the software and what files to install, such as add-ons and other extras. 3. Activate Commercial software programs often require activation, either immediately or after a trial period. Without activation, the software may run for a limited time or with limited features. The activation process requires purchasing a license and entering the activation key. 4. Update The final step of deployment is updating the software to the latest version. Many applications include a "Check for updates" command to check if a newer version is available. Keeping software up-to-date reduces issues with bugs, security holes, and incompatibilities. Advanced Deployment Deploying software on a home PC is often as simple as downloading, installing, and activating the program. Deployment on corporate machines or across multiple systems on a network can be more complex. For example, a network administrator may need to configure a program identically on dozens of computers. To ensure consistency, the admin may use a command-line interface or a deployment script to automate the installation process. NOTE: Windows installers usually have an .EXE or .MSI extension. Active Directory and System Center Configuration Manager (SCCM) installations may require MSI files. Updated April 1, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/deployment Copy

Deprecated

Deprecated "Deprecated" is a software development term for features being phased out and replaced. A deprecated feature in a programming language or software application may still work, but it is no longer supported or improved. Most deprecated features are eventually removed entirely. The developers of a program or programming language typically announce a feature's deprecation months or years in advance to give software developers time to implement a replacement. Programming languages and software applications in active development will occasionally replace old functions and other features with entirely new replacements. For example, in PHP, the money_format() function that formats a number as a currency string was deprecated in favor of the more robust NumberFormatter class, which can format numbers in many ways including as currency. The old money_format() function continued to work after its deprecation was announced with PHP 7.4, giving PHP developers time to update their source code in advance of its removal entirely in PHP 8. Being deprecated does not always mark a feature for removal, particularly in a program or language that has been around for a long time. A deprecated function may stick around for backward compatibility, even if a better implementation is available or the original method causes system instability, to allow old software to continue to work without an overhaul. For example, JavaScript's escape() function was deprecated due to poor support for Unicode characters but is still available in most web browsers to allow old JavaScript code to run. Xcode displaying a warning that this project uses a deprecated function Software and programming language features may become deprecated for several reasons: A replacement feature is introduced that performs the same function with improved performance or reduced complexity, making the deprecated feature obsolete. Old features that are inconsistent with the current state of the program or language may be deprecated in favor of a redesigned version. These features may use outdated syntax or naming conventions that could confuse a new programmer, so they're deprecated and replaced with versions that align better with modern standards. A feature with a design flaw (like a security vulnerability or memory leak) may be deprecated to discourage use. Redundant features may be deprecated and removed to reduce the complexity of the system. Scheduled changes to a program or language will cause an existing feature to break, so it is marked as deprecated in advance to encourage developers to use an alternative that won't be affected. NOTE: The developers of a program or programming language announce feature deprecation in their documentation and release notes. Deprecated features in a programming language will also trigger warning messages in that language's integrated development environment (IDE) when software developers attempt to use them. Updated May 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/deprecated Copy

Design Pattern

Design Pattern Design patterns are reusable solutions for software development. They serve as templates that programmers can use when creating applications. They are not specific to individual programming languages, but instead are best practices or heuristics that can be applied in different programming environments. While design patterns are not language-dependent, they often include objects or classes. Therefore, they are typically associated with object-oriented programming. Individual patterns can be classified into three different categories: 1) creational patterns, 2) structural patterns, and 3) behavioral patterns. 1. Creational Patterns Creational design patterns describe ways to create objects using methods that are appropriate for different situations. For example, the "Singleton" pattern is used to create a basic class that will only have one instance. A common example is a global variable defined in the source code of a program. An "Object Pool" pattern is used to create a class with a "pool" of objects that can be retrieved as needed instead of being recreated. This is often used for caching purposes. 2. Structural Patterns Structural design patterns define the relationships between objects. For example, the "Private Class Data" pattern is used to limit access to a specific class. This can prevent unwanted modification of an object. The "Decorator" pattern, on the other hand, allows behaviors and states to be added to an object at runtime. This provides programmers with the flexibility to add as many behaviors to an object as needed. One example is an avatar in a video game that accumulates weapons, armor, and items throughout the game. The appropriately named "Decorator" pattern would provide a good framework for this process. 3. Behavioral Patterns Behavioral design patterns describe the behavior of objects, such as the way they communicate with each other. One example is the "Command" pattern, which describes objects that execute commands. The "Memento" pattern records the state of an object so it can be restored to its saved state. These two patterns can be used together to perform Undo and Redo operations in a program. Summary Each of the three categories includes several other design patterns that programmers can use. While the patterns provide helpful templates for software developers, they are sometimes criticized for being unnecessary or not specific enough for certain applications. Therefore, while design patterns are useful tools for programming, they do not need to be followed exactly to create a well-designed software program. Updated July 21, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/design_pattern Copy
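For example, a bare-bones version of the "Singleton" pattern mentioned above can be sketched in Python (the class name is arbitrary):

class AppConfig:
    _instance = None

    def __new__(cls):
        # Create the object only once; every later call returns the same instance
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

first = AppConfig()
second = AppConfig()
print(first is second)   # True: both names refer to the single shared instance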

Desk Checking

Desk Checking Desk checking is the process of manually reviewing the source code of a program. It involves reading through the functions within the code and manually testing them, often with multiple input values. Developers may desk check their code before releasing a software program to make sure the algorithms are functioning efficiently and correctly. The term "desk checking" refers to the manual approach of reviewing source code (sitting at a desk), rather than running it through a debugger or another automated process. In some cases, a programmer may even use a pencil and paper to record the process and output of functions within a program. For example, the developer may track the value of one or more variables in a function from beginning to end. Manually going through the code line-by-line may help a programmer catch improper logic or inefficiencies that a software debugger would not. While desk checking is useful for uncovering logic errors and other issues within a program's source code, it is time-consuming and subject to human error. Therefore, an IDE or debugging tool is better suited for detecting small problems, such as syntax errors. It is also helpful to have more than one developer desk check a program to reduce the likelihood of overlooking errors in the source code. Updated October 20, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/desk_checking Copy
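For example, a developer desk checking the small Python function below (the function and input values are hypothetical) might trace its variables on paper for the input [3, 8, 5]:

def largest(numbers):
    biggest = numbers[0]
    for n in numbers[1:]:
        if n > biggest:
            biggest = n
    return biggest

# Desk-check trace for largest([3, 8, 5]):
#   start                 -> biggest = 3
#   n = 8, 8 > 3          -> biggest = 8
#   n = 5, 5 > 8 is false -> biggest stays 8
#   return 8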

Desktop

Desktop The desktop is the primary user interface of a computer. When you boot up your computer, the desktop is displayed once the startup process is complete. It includes the desktop background (or wallpaper) and icons of files and folders you may have saved to the desktop. In Windows, the desktop includes a task bar, which is located at the bottom of the screen by default. In Mac OS X, the desktop includes a menu bar at the top of the screen and the Dock at the bottom. The desktop is visible on both Windows and Macintosh computers as long as an application or window is not filling up the entire screen. You can drag items to and from the desktop, just like a folder. Since the desktop is always present, items on the desktop can be accessed quickly, rather than requiring you to navigate through several directories. Therefore, it may be helpful to store commonly used files, folders, and application shortcuts on your desktop. Both the Windows and Macintosh operating systems allow you to customize the appearance of your desktop. In Windows 7, you can change the desktop background and select the default desktop icons within the "Personalization" control panel. In Mac OS X 10.6, you can change the desktop background using the "Desktop & Screen Saver" system preference. You can choose what items are shown on the desktop by selecting Finder → Preferences… and checking the items you want displayed. NOTE: The term "desktop" may also be short for desktop computer. Updated March 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/desktop Copy

Desktop Computer

Desktop Computer A desktop computer (or desktop PC) is a computer that is designed to stay in a single location. It may be a tower (also known as a system unit) or an all-in-one machine, such as an iMac. Unlike laptops and other portable devices, desktop computers cannot be powered from an internal battery and therefore must remain connected to a wall outlet. In the early age of computers, desktop computers were the only personal computers available. Since laptops and tablets did not exist, all home PCs were desktop computers. Still, the term "desktop computer" was used back then to differentiate between personal PCs and larger computers, such as mainframes and supercomputers. While desktop computers were the most popular type of personal computer for several decades, in recent years, laptop sales have surpassed those of desktop PCs. Because of the rise in mobile computing, this trend is likely to continue. However, desktop computers remain the most popular choice for business workstations and family computers. Updated March 10, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/desktop_computer Copy

Desktop Publishing

Desktop Publishing Desktop publishing is the process of using a computer to lay out text and graphics on a page. Desktop publishing software helps to create page layouts and designs for large-scale commercial print jobs, as well as small projects made on a home printer. It can also mean creating page designs for electronic distribution instead of printing. Any time you use a computer to create a printable document, it can be considered desktop publishing. However, the term is most commonly used to refer to creating advanced page layouts that use multiple text boxes, graphics, and advanced typesetting features (like creating font sets or inserting specific glyphs). For example, desktop publishing software allows you to easily create a newsletter layout with a title, headlines, multiple columns of text, images with captions, and other graphical elements. Adobe InDesign and Quark XPress are the two most common professional desktop publishing suites. These applications include powerful tools for importing text files and graphics, creating asset libraries, and designing sets of master pages. However, many word processors like Microsoft Word and Apple Pages also include page layout tools that you can use for basic desktop publishing tasks. Updated February 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/desktop_publishing Copy

Desktop Software

Desktop Software Desktop software refers to any application that runs locally on a desktop computer or laptop. It is often contrasted with mobile apps and cloud-based software. Desktop software is either bundled with an operating system (like Windows or macOS) or installed by the user. For several decades, software companies sold desktop software in physical boxes containing floppy disks, CDs, or DVDs. As Internet access became widespread in the early 2000s, developers began offering apps via Internet download. Today, downloading and installing desktop apps is far more common than installing apps from physical media. Examples of desktop apps include: Microsoft Word Adobe Photoshop Corel WordPerfect Apple Pages Apple Final Cut Pro The above apps are considered desktop software since you can download and install them on a desktop computer or laptop. Some programs, like Microsoft Word and Apple Pages, are also available as web applications, meaning you can run a cloud-based version of the app in your web browser. They are also available as mobile apps, which run on iOS and Android devices. When requesting support for one of these applications, it is helpful to specify which type of app you are using. Some web apps are the same as their desktop counterparts, but desktop applications generally have a more robust feature set. Mobile apps often have limited features based on the smaller screen size. Desktop programs are sometimes called "traditional applications" since they have been around since the 1980s. While many people now use mobile apps and cloud-based "SaaS" software on a daily basis, desktop software is still essential for most computer users. Updated February 19, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/desktop_software Copy

Developer

Developer In the IT world, a developer is a person who creates something with a computer. The term encompasses many types of content, such as software, websites, and written material. Therefore, developers often have more specific titles. Some common examples include software developers, web developers, and content developers. Software Developer A software developer is someone who creates software programs. Software developers often have more specific titles, such as programmer, software analyst, or software engineer. A software programmer, for example, is someone who writes source code that can be run as a script or compiled into an executable program. A software analyst provides the requirements and specifications for a software program and may also assist in programming the software. A software engineer is the person who designs applications from the ground up and often oversees the development of software programs. Web Developer A web developer is a person who builds and maintains websites. Technically, he or she can perform more specialized web development tasks, such as coding HTML, writing CSS, and publishing content online. Though web design is a subcategory of web development, web designers may also be called web developers since there is a lot of overlap between the two professions. A person who maintains the content of a website and replies to visitor emails is called a webmaster. Software and web developers are often categorized as frontend or backend developers. Frontend refers to layout and user interface design, while backend refers to server-side coding, including scripting and database queries. Someone who handles both frontend and backend development is a "full stack developer." Content Developer A content developer, also called a content producer, is someone who creates publishable content. This is often web-based content, though it may be material for a printed publication such as a magazine or technical manual as well. The content typically includes original text which may be published as a news story, blog, or other type of article. It may also contain images or video clips that are relevant to the article. Updated October 21, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/developer Copy

DevOps

DevOps DevOps combines the words "development" and "operations." It encompasses developers and IT operations personnel within an organization. The goal of DevOps integration is to improve collaboration between development and operations teams. An operations manager, for example, may request an update to a web application from the developers. In order for the update to be successful, the operations team must accurately describe all the necessary features of the update. The development team can then implement the update and test it internally before releasing it to the operations team for production. If a bug is found in a live website or software program, the operations team can send the information to the development team so the engineers can review and fix the error. Setting up a structured and streamlined workflow for requesting, implementing, and publishing updates can help companies release bug fixes quickly and efficiently. A DevOps process for software updates might include the following steps: Receiving and processing user feedback (Operations) Designing the update (Operations and Development) Coding and implementing the update (Development) Testing the update internally (Development) Publishing the update to production (Operations) Testing the live update (Operations and Development) The above steps are just one example of how a DevOps process might take place. There is no specific set of steps a company must follow. A small company, for instance, may have fewer steps and more overlap between divisions than a large corporation. The end goal of DevOps, regardless of company size, is to produce reliable software in the shortest time possible. Ways to improve DevOps workflow include: Creating identical testing and production environments Automating software tests, such as unit testing Designing software that is easily scalable Using version control to keep track of changes NOTE: A "DevOps Manager" is a relatively new position in the field of information technology. The role of a DevOps manager is to oversee both the development and operations teams, helping them communicate and work together effectively. Updated November 16, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/devops Copy

DFS

DFS Stands for "Distributed File System." A DFS manages files and folders across multiple computers. It serves the same purpose as a traditional file system, but is designed to provide file storage and controlled access to files over local and wide area networks. Even when files are stored on multiple computers, DFS can organize and display the files as if they are stored on one computer. This provides a simplified interface for clients who are given access to the files. Users can also share files by copying them to a directory in the DFS and can update files by editing existing documents. Most distributed file systems provide concurrency control or file locking, which prevents multiple people from editing the same file at the same time. There are several types of DFSes, though some of the most common implementations include Server Message Block (SMB), Apple Filing Protocol (AFP), and Microsoft's Distributed File System (often called "MS-DFS" or simply "DFS"). DFS is included as a standard component of Windows Server, while SMB is typically installed on Linux machines. Updated October 10, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dfs Copy

DHCP

DHCP Stands for "Dynamic Host Configuration Protocol." DHCP is a protocol that automatically assigns a unique IP address to each device that connects to a network. With DHCP, there is no need to manually assign IP addresses to new devices. Therefore, no user configuration is necessary to connect to a DHCP-based network. Because of its ease of use and widespread support, DHCP is the default protocol used by most routers and networking equipment. When you connect to a network, your device is considered a client and the router is the server. In order to successfully connect to a network via DHCP, the following steps must take place. When a client connects to a network, it broadcasts a DHCPDISCOVER request to locate a DHCP server. The router either receives the request or redirects it to the appropriate DHCP server. If the server accepts the new device, it will send a DHCPOFFER message back to the client, which contains the client device's MAC address and the IP address being offered. The client returns a DHCPREQUEST message to the server, confirming it will use the IP address. Finally, the server responds with a DHCPACK acknowledgement message that confirms the client has been given access (or a "lease") for a certain amount of time. DHCP works in the background when you connect to a network, so you will rarely see any of the above steps happen. The time it takes to connect via DHCP depends on the type of router and the size of the network, but it usually takes around three to ten seconds. DHCP works the same way for both wired and wireless connections, which means desktop computers, tablets, and smartphones can all connect to a DHCP-based network at the same time. Updated August 19, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dhcp Copy
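The order of the four messages can be summarized as a toy simulation in Python. This only mimics the exchange described above and does not send real network packets; the MAC address, IP address, and lease time are made up for the example:

def dhcp_handshake(client_mac):
    print(f"Client {client_mac}: DHCPDISCOVER (broadcast: is any DHCP server out there?)")

    offered_ip = "192.168.1.42"          # chosen from the server's pool of available addresses
    print(f"Server: DHCPOFFER ({offered_ip} offered to {client_mac})")

    print(f"Client {client_mac}: DHCPREQUEST (I will use {offered_ip})")

    lease_seconds = 86400
    print(f"Server: DHCPACK (lease on {offered_ip} granted for {lease_seconds} seconds)")
    return offered_ip

dhcp_handshake("3C:22:FB:90:12:AB")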

Dial-up

Dial-up Dial-up refers to an Internet connection that is established using a modem. The modem connects the computer to standard phone lines, which serve as the data transfer medium. When a user initiates a dial-up connection, the modem dials a phone number of an Internet Service Provider (ISP) that is designated to receive dial-up calls. The ISP then establishes the connection, which usually takes about ten seconds and is accompanied by several beeping and buzzing sounds. After the dial-up connection has been established, it is active until the user disconnects from the ISP. Typically, this is done by selecting the "Disconnect" option using the ISP's software or a modem utility program. However, if a dial-up connection is interrupted by an incoming phone call or someone picking up a phone in the house, the service may also be disconnected. In the early years of the Internet, especially in the 1990s, a dial-up connection was the standard way to connect to the Internet. Companies like AOL, Prodigy, and Earthlink offered dial-up service across the U.S., while several smaller companies offered local dial-up Internet connections. However, due to slow speeds (a maximum of 56 Kbps), and the hassle of constantly disconnecting and reconnecting to the ISP, dial-up service was eventually replaced by DSL and cable modem connections. Both DSL and cable lines, known as "broadband" connections, offer speeds that are over 100 times faster than dial-up and provide an "always on" connection. While we don't get to listen to the fun buzzing and beeping noises of older modems anymore, it certainly is nice to download data in a fraction of the time. Updated April 7, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dialup Copy

Dialog Box

Dialog Box A dialog box is a type of pop-up window that presents a computer user with some information and requests their feedback. Every operating system with a graphical user interface uses dialog boxes to display information, get user input, or ask for confirmation before it carries out a command. Some dialog boxes are simple, displaying a message and providing OK and Cancel buttons to confirm or dismiss the changes, while others include one or more options for the user to fill out. Dialog boxes may appear for many reasons. Some simple dialog boxes notify you that a command may result in data loss, like closing an application without saving a file you're working on. In these cases, you'll only have a few options represented by some buttons — save the file, close the application without saving, or cancel the quit command to return to the app. Other dialog boxes present you with more information, like a file transfer dialog box that shows a progress bar alongside a button to cancel the transfer. A Save As dialog box Other dialog boxes present you with some options to fill out. Two of the most common examples are an application's Open and Save dialog boxes. These include file browsers that allow you to choose a file to open or a directory to save to, a drop-down menu to select a specific file type, and a text field to enter a file name. Another common example is the Print dialog box, which lets you choose a printer and customize print settings. An application's Preferences or Options dialog box often includes so many options that the app organizes them into tabs. These may also include an Apply button, which applies the changes you've selected without closing the dialog box. Updated November 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dialog_box Copy
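Most GUI toolkits provide ready-made dialog boxes. For example, Python's built-in tkinter module can display a simple confirmation dialog and a Save As dialog with just a few calls (the titles and file types here are arbitrary):

import tkinter as tk
from tkinter import filedialog, messagebox

root = tk.Tk()
root.withdraw()                       # hide the main window; only show dialogs

# A simple Yes/No/Cancel confirmation dialog
answer = messagebox.askyesnocancel("Unsaved Changes",
                                   "Save changes before closing?")

# A Save As dialog with a file browser, file type menu, and name field
if answer:
    path = filedialog.asksaveasfilename(defaultextension=".txt",
                                        filetypes=[("Text files", "*.txt")])
    print("Saving to:", path)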

Digital

Digital Digital refers to information represented as ones and zeros, or binary code. Modern electronic devices store, process, and transmit data digitally. For example, computers, smartphones, HDTVs, and AVRs are all digital devices since they process digital data. While ones and zeros may seem rudimentary, digital data can represent almost anything you can imagine, from text to images, to video, to generative AI chatbots. As a simple example, the letter "T" is represented digitally in ASCII code as 01010100. The word "TechTerms" is represented as: 01010100 01100101 01100011 01101000 01010100 01100101 01110010 01101101 01110011 Each 1 or 0 is a bit, while each 8-bit set is a byte. If it takes one byte to represent a single character, you can imagine how many bytes it takes to store a document, image, or video. That's why data storage is often measured in megabytes and gigabytes. Digital data is represented using 1s and 0s Digital vs. Analog Humans perceive the world in analog. When you view the world around you, you see images and hear sounds as continuous streams of information. Digital devices, however, convert this continuous information into discrete binary code. The accuracy of the digital representation involves two key factors: Resolution: Refers to the number of pixels captured in a digital image. Higher resolution images have more pixels. Bit Depth: Determines the amount of color information captured for each pixel. Higher bit depths allow for more colors and finer gradations. Higher resolutions and bit depths result in more accurate digital representations of analog data. For example, a basic digital camera might capture photos with a 10-megapixel resolution and 8-bit depth. A professional digital camera may capture images with a 40-megapixel resolution and 14-bit depth per color channel. Photos taken with the professional digital camera will appear more detailed and vibrant compared to those captured with a standard camera. Advantages of Digital Information Although digital signals are estimations of analog signals, they offer significant advantages: Precision: Digital data can be copied, edited, and moved without any loss in quality. Editing digital audio and video is much faster and less expensive than editing analog media. Durability: Digital information is less susceptible to degradation over time compared to analog data. For example, the sound from an analog record or cassette tape may change the more it is played, while the sound from a CD will stay the same. Speed: Digital transmission lines, such as fiber optic cables, offer data transfer rates that are hundreds of times faster than analog lines. This increased bandwidth is what allows millions of people to communicate over the Internet in realtime. The benefits of digital technology make it the preferred method for storing and processing information in today's world. Updated August 5, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digital Copy
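You can reproduce the "TechTerms" example above with a couple of lines of Python, which convert each character to its 8-bit binary code:

text = "TechTerms"

binary = " ".join(format(ord(character), "08b") for character in text)
print(binary)
# 01010100 01100101 01100011 01101000 01010100 01100101 01110010 01101101 01110011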

Digital Asset

Digital Asset A digital asset is a digital entity owned by an individual or company. Examples include digital photos, videos, and songs. These assets are not tangible, meaning they have no physical presence. Instead, they are files that reside on a storage device, such as a local computer or a cloud-based storage network. The term "digital asset" ascribes legal ownership and value to a digital file. For example, if you pay $1.29 to download a song from iTunes, it becomes a digital asset since you own the song. Any stock photos you purchase are digital assets since you own the usage rights. When you buy a software application, it becomes a digital asset since you own a license to use the software. "Digital asset" is also important in regard to copyright law. While you own a song or video that you purchase online, the publisher or artist still owns the copyright to the content. This means you cannot copy, distribute, or sell the digital content. Similarly, you own the copyright to photos that you take with your digital camera or smartphone. Digital Assets and the DMCA Unlike tangible assets, digital assets are easy to copy and share. Therefore, it is essential to assign legal rights to digital data. The United States government passed the Digital Millennium Copyright Act (DMCA) in 1998, which helps protect digital data and intellectual property. It criminalizes the unauthorized dissemination of copyrighted digital content. It also requires individuals and companies to stop publishing or sharing digital data once they have been informed of a copyright infringement. This law has helped protect digital media and unique content on the web. NOTE: Since digital data is saved in a binary format, digital assets are simply ones and zeros saved to a storage device. This may seem trivial, but millions or billions of ones and zeros can be used to store all types of digital content, including source code, documents, images, videos, webpages, and applications. These are all digital assets, which are legally protected by U.S. and international copyright law. Updated August 10, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digital_asset Copy

Digital Camera

Digital Camera A digital camera is similar to a traditional film-based camera, but it captures images digitally. When you take a picture with a digital camera, the image is recorded by a sensor, called a "charge-coupled device" or CCD. Instead of saving the picture on analog film like traditional cameras, digital cameras save photos in digital memory. Some digital cameras have built-in memory, but most use an SD or Compact Flash card. Digital cameras have several advantages over their analog counterparts. While film rolls typically hold about 24 pictures, memory cards have the capacity to store several hundred or even several thousand pictures on a single card. Therefore, photographers can be much more liberal in the shots they take. Since the images are captured digitally, unwanted images can be deleted directly on the camera. Most digital cameras also include a small LCD screen that shows a live preview of the image, which makes it easier to capture the perfect picture. These cameras usually include an option to record video as well. In the past, people would need to drop off their film at a photo processing location in order to get their pictures developed. With digital cameras, you can simply import the pictures to your computer via a USB cable. Once the digital photos have been imported, you can publish them online or e-mail them to friends. You can also edit them using photo editing software. If you want to print hard copies of your photos, you can use a home printer or an online printing service. Digital cameras range greatly in size and quality. On the low end are portable electronic devices such as cell phones and iPods that have digital cameras built into them. In the mid range are standalone point-and-shoot cameras that have additional features and picture taking modes. On the high end are digital SLR (single lens reflex) cameras, which support interchangeable lenses. These cameras are used by photography professionals and capture high resolution images with accurate color. While early digital cameras did not perform as well as their film-based counterparts, modern digital cameras can now capture even higher quality images. Today's point-and-shoot cameras now have resolutions in excess of 10 megapixels, which allows them to capture crystal clear images. They also focus and capture images much faster than before, which gives them the responsiveness of analog cameras. These improvements, along with the many advantages of digital photography, are why nearly all photographers have gone digital. Updated November 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digitalcamera Copy

Digital Footprint

Digital Footprint A digital footprint is a trail of data you create while using the Internet. It includes the websites you visit, emails you send, and information you submit to online services. A "passive digital footprint" is a data trail you unintentionally leave online. For example, when you visit a website, the web server may log your IP address, which identifies your Internet service provider and your approximate location. While your IP address may change and does not include any personal information, it is still considered part of your digital footprint. A more personal aspect of your passive digital footprint is your search history, which is saved by some search engines while you are logged in. An "active digital footprint" includes data that you intentionally submit online. Sending an email contributes to your active digital footprint, since you expect the data to be seen and/or saved by another person. The more email you send, the more your digital footprint grows. Since most people save their email online, the messages you send can easily remain online for several years or more. Publishing a blog and posting social media updates are other popular ways to expand your digital footprint. Every tweet you post on Twitter, every status update you publish on Facebook, and every photo you share on Instagram contributes to your digital footprint. The more time you spend on social networking websites, the larger your digital footprint will be. Even "liking" a page or a Facebook post adds to your digital footprint, since the data is saved on Facebook's servers. Everyone who uses the Internet has a digital footprint, so it is not something to be worried about. However, it is wise to consider what trail of data you are leaving behind. For example, understanding your digital footprint may prevent you from sending a scathing email, since the message might remain online forever. It may also lead you to be more discerning in what you publish on social media websites. While you can often delete content from social media sites, once digital data has been shared online, there is no guarantee you will ever be able to remove it from the Internet. Updated May 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digital_footprint Copy

Digital Forensics

Digital Forensics Digital forensics is a subset of forensics — the collection and analysis of criminal evidence — that focuses on digital information. Examples include retrieving data from a computer, analyzing the metadata of specific files, and monitoring network traffic. A common first step in the digital forensics process is accessing data from an electronic device. For example, a laptop, smartphone, or cloud storage account may contain data that pertains to a specific investigation. In some cases, the data is easily accessible, while in others, the data may be encrypted or require a login. Digital forensics may also involve recovering data from a damaged or partially erased storage device. After gaining access to the data, investigators may search or browse through folders and files to find relevant information. For instance, they may look for emails or text messages sent at a specific time. Images or videos may also provide useful evidence. The metadata of certain files can be helpful as well. For example, an email header may include the IP address of the device that sent the message. Image EXIF data may include GPS location information, as well as the date and time the image was taken. While digital forensics typically focuses on recovering existing data, it may also involve monitoring data in realtime. For instance, a detective may use a packet sniffer to record packets sent over an individual's wireless connection. Data Authenticity Regardless of the process used to recover digital information, the authenticity of the data is paramount in criminal cases. Since digital information can easily be copied or modified, it is often difficult to verify original data. Corroborating evidence may be required to ensure that digital data has not been altered. Examples include copies of data from multiple devices and log files from a server. Updated July 14, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digital_forensics Copy
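One common way to show that collected data has not been altered is to record a cryptographic hash of each file at the time it is gathered and compare it against a hash computed later. Below is a minimal sketch in Python using the standard hashlib module; the file path is hypothetical:

import hashlib

def file_fingerprint(path):
    # Return the SHA-256 hash of a file's contents, read in chunks.
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    return sha256.hexdigest()

# If the hash recorded at collection time matches a later hash,
# the file has not been altered in the meantime.
print(file_fingerprint("evidence/mail_archive.pst"))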

Digital Signature

Digital Signature A digital signature is a method of authenticating a message or document. A digital signature confirms that a message or document originated with a specific signer, and that it has not been altered since it was signed. They are often used with PDF documents, email messages, and word-processing documents to authenticate the signer and recipient. Digital signatures require a form of identification, called a digital certificate, that authenticates the signer's identity. These certificates come from a certificate authority (CA), which validates someone's identity before issuing their certificate. Once you have a digital certificate, you can register it with applications that support digital signatures, like Adobe Acrobat (for signing PDF documents) or Microsoft Outlook (for signing email messages). Signing a document, and verifying its digital signature When you sign a message or document, the signing application first uses a hashing algorithm to generate a hash from the file, which is then encrypted using your private key. The application combines this value with your digital certificate and public key into the digital signature, then packages it with the document or message. Later, the recipient can verify your identity by decrypting the hash using your public key, then running the hashing algorithm on the received message or document. If the two hashes match, the message or document they received is identical to the one you signed. The certificates included in digital signatures are themselves digitally signed by the issuing certificate authority. If the software reading a signed document already trusts the CA and has its public key, it can verify its signature without going online for an additional verification check. In many countries, digital signatures are equivalent to handwritten signatures and carry the same legal weight. They may both certify and approve legal and business documents. The document's author creates the certifying signature to show that it has not changed since signing. Other signatories may add their approval signature later to indicate that they have read and approved the document. Updated April 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/digitalsignature Copy
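The sign-and-verify flow described above can be sketched in Python with the third-party cryptography package (pip install cryptography). The key pair, message, and padding choices here are illustrative assumptions, not a description of any particular signing application:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
document = b"The quarterly report, final version."

# Signing: hash the document, then encrypt the hash with the private key
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verifying: the recipient checks the signature using the signer's public key
try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: the document has not changed since it was signed.")
except InvalidSignature:
    print("Signature invalid: the document was altered or signed with a different key.")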

Digitize

Digitize To digitize something means to convert it from analog to digital. For example, an analog audio signal received by a microphone is digitized when it is recorded by a computer. Computers must digitize analog input because they are digital devices and cannot process analog information. Digital recordings are created by taking samples of analog data, typically at the rate of several thousand per second. For example, CD audio is digitized using a sample rate of 44.1 kilohertz (kHz). That means the analog signal is sampled 44,100 times every second. The greater the sampling rate, the better the quality (or higher the fidelity) of the digitized file. Analog data is digitized using a device called an ADC or "analog-to-digital converter." This may be a standalone box or simply a small chip inside a computer. When converting professional audio or video from analog to digital, it is important to use a high-quality ADC to maintain the character of the original recording. The opposite of an ADC is a DAC, which converts digital data to analog. Since digitizing takes small samples of an analog signal, the result is an estimation of the original data. Therefore, analog data is actually more accurate than digitized data. However, as long as a high enough sampling rate is used, humans perceive digitized audio and video as a steady stream of analog information. Because computers can edit digital data and copy digital files with no loss of quality, most of today's media is created and saved in a digital format. Updated November 28, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/digitize Copy
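Sampling and quantization can be illustrated with a short sketch in Python. The tone frequency, duration, and 16-bit depth below are arbitrary example values; only the 44.1 kHz sample rate comes from the CD audio example above:

import math

SAMPLE_RATE = 44_100   # samples per second (CD audio)
BIT_DEPTH = 16         # bits per sample
DURATION = 0.001       # seconds of signal to digitize
FREQ = 440             # frequency of a test tone, in Hz

max_amplitude = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit audio

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                                   # time of this sample
    analog_value = math.sin(2 * math.pi * FREQ * t)       # "analog" signal in [-1, 1]
    samples.append(round(analog_value * max_amplitude))   # quantize to an integer

print(len(samples), "samples:", samples[:5])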

DIMM

DIMM Stands for "Dual In-line Memory Module." A DIMM is a small circuit board that holds memory chips, and is also known as a RAM stick. A DIMM fits into a corresponding slot on a computer motherboard, allowing the RAM configuration in a computer to change when needed. The amount of RAM available on a DIMM depends on the number of memory chips on it and the individual capacity of each chip, so one DIMM may have more memory than another by having more memory chips, higher-capacity memory chips, or both. A half-size variant, called SO-DIMM, is designed for smaller computers like laptops. As new, faster types of RAM are developed, motherboard chipsets are updated as well to support them. To ensure that the RAM used on a DIMM is compatible with the rest of the system, each generation has a different physical design. A notch along the bottom of the circuit board changes position with every new design, and a corresponding notch in the slot prevents the wrong type of DIMM from being inserted. These notches also help guide installation, ensuring that the DIMM is installed in the correct direction. A Crucial DDR3 DIMM (top) and a Crucial DDR4 DIMM (bottom), with different numbers of contact pins and a relocated notch DIMMs replaced a previous memory module standard, called SIMMs. SIMMs utilize a 32-bit data bus to the CPU, while DIMMs use a 64-bit bus to transfer data twice as fast. A SIMM only has one set of redundant electrical contacts that fit into the slot, while each side of a DIMM has a unique set of electrical contacts. NOTE: In addition to changing the location of the notch, some designs also increase the number of contact pins along the bottom of the circuit board—A DDR3 DIMM has 240 pins, for example, while a DDR4 DIMM has 288. Updated September 30, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dimm Copy

Diode

Diode A diode is an electrical component designed to conduct electric current in only one direction. It has two ends (or terminals), each with an electrode of a different charge. The "anode" end has a positive charge relative to the negatively charged "cathode" end. Current naturally flows in the direction from the anode to the cathode. Diodes commonly serve as switches, allowing or preventing the flow of current. For example, current can be stopped by simply reversing an active diode inside an electrical circuit. Flipping the diode back to the original position will allow electricity to continue flowing through the circuit. Multiple diodes within a circuit can be used as logic gates, performing AND and OR functions. While current generally flows through a diode in one direction, in some cases the current can be reversed. If the negative voltage applied to a diode exceeds the "breakdown voltage," the current will begin flowing in the opposite direction. The breakdown voltage of a typical diode ranges from -50 to -100 volts, though the amount can be significantly less or more based on the design and materials used in the diode. Some diodes can be damaged by the reverse flow of current, while others are designed to allow current to flow in both directions. Zener diodes, for instance, are designed with specific breakdown voltages for different applications. Another common type of diode is a light-emitting diode, or LED. Light emitting diodes generate visible light when current passes between the anode and cathode through a space called the p-n junction. The electrical charge in this space produces light in different colors depending on the charge and materials used in the diode. Updated March 15, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/diode Copy

Direct Digital Marketing

Direct Digital Marketing Direct digital marketing, also known as "DDM," is a type of marketing that is done exclusively through digital means. It may be used to supplement or even replace traditional physical marketing strategies. The primary channels of direct digital marketing include e-mail and the Web. While most of us still receive an abundance of physical marketing materials in our mailboxes each week, many of these mailings have been replaced by e-mail. By using e-mail marketing, companies can drastically reduce their mailing costs, since sending e-mail messages is essentially free. Compare this to mailing physical brochures that may cost $0.50 per recipient. If a company sends out one million mailings, using e-mail could save the company $500,000 in mailing costs. While e-mail marketing is a great asset for many businesses, it can also be abused. Since it doesn't cost anything to send e-mail messages, it is possible to distribute unsolicited messages to large lists of recipients at little to no cost. This kind of unwanted electronic junk mail has become widely known as "spam." Fortunately, junk mail filters have helped reduce the impact of these messages for most users. Many companies and organizations also offer an "unsubscribe" option in their mailings, which allows users to remove themselves from the mailing lists. The Web is another popular medium for direct digital marketing. Many companies now advertise on websites through banner ads, text links, and other types of advertisements. By using Web marketing, companies can drive visitors directly to their website with a single click. This provides a tangible benefit over print and television advertising, which may not fully capture a viewer's interest. Additionally, companies can target their ads on pages with relevant content using contextual ad placement services, such as Google AdSense. This allows businesses to attract people who are the most likely to be interested in the products or services they offer. In the past few years, DDM has revolutionized the marketing industry. By using digital communications, businesses can advertise in several new ways that were not possible before. While e-mail and the Web remain the most popular mediums for DDM, digital marketing continues to expand into other areas as well. Mobile phones and video games are already being used for DDM and you can expect many other mediums to follow. Updated October 29, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/directdigitalmarketing Copy

Direct3D

Direct3D Direct3D is a 3D graphics API for Windows computers and Xbox game consoles. It is part of the DirectX API suite that allows game developers to use a common set of commands to control hardware from many different manufacturers. Developers can use Direct3D to create 3D graphics and animations, including advanced shaders and lighting effects. It is one of the most widely-used 3D graphics APIs, along with Vulkan (the successor to OpenGL) and Apple's Metal API (exclusive to macOS and iOS devices). The Direct3D API provides an abstraction layer between the game software and the GPU hardware. A developer includes a standard API command in their game's source code to perform a function— for example, applying a specific texture to a 3D object or changing the brightness of a light source. The API passes that command on to the GPU's driver, which translates it into the complicated machine code that the GPU uses to render the correct image. The developer does not need to write the code to control the GPU directly, saving them significant time and allowing their game to run on any compatible hardware. Every GPU that supports Windows includes Direct3D-compatible drivers. The Direct3D API helps developers create 3D games like Starfield Direct3D and the rest of the DirectX suite were first introduced as a feature of Windows 95 and have received regular updates ever since. Important features added to Direct3D over the years include pixel shaders, multithreaded rendering, and the ability to run non-graphical compute operations on GPU hardware. Direct3D 12, released as part of a Windows 10 update, is the first low-level version of Direct3D. It provides more direct control over a GPU and its resources, allowing games to run more efficiently but requiring more careful work by the developers to deal with the extra complexity. Updated October 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/direct3d Copy

Directory

Directory A directory is another name for a folder. File systems use directories to organize files within a storage device, such as an HDD or SSD. For example, system files may be located in one directory, while user files may be stored in another. While directories often contain files, they may also contain other directories, or subdirectories. The user folder, for instance, may include directories such as Documents, Pictures, and Videos. Each of these directories may contain files and other subdirectories. This resulting directory structure, represented visually, would look like an upside-down tree. The top-level directory of a volume that contains all other directories is aptly labeled the root directory. The location of an individual file or folder within a directory can be represented by a directory path, such as C:\Program Files (x86)\Google\Chrome\Application. As you browse through your file system, whenever you open a subdirectory, it is called "moving down a directory." If you open the folder that contains the current directory, it is called "moving up a directory." Directory vs Folder The terms "directory" and "folder" can be used interchangeably. However, folders are technically the visual representation of a directory. In other words, a folder is an icon with a name that represents a directory in the file system. Updated August 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/directory Copy
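The tree structure described above is easy to see by recursively listing a directory. Here is a minimal sketch in Python using the standard pathlib module; the starting folder is an assumption and can be any directory on your system:

from pathlib import Path

def print_tree(directory, depth=0):
    # Recursively print a directory, its files, and its subdirectories as a tree.
    path = Path(directory)
    print("  " * depth + path.name + "/")
    for entry in sorted(path.iterdir()):
        if entry.is_dir():
            print_tree(entry, depth + 1)     # move down a directory
        else:
            print("  " * (depth + 1) + entry.name)

print_tree(Path.home() / "Documents")   # assumes a Documents folder exists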

DirectX

DirectX DirectX is a set of APIs that help software developers make video games for Windows PCs and Xbox game consoles. The DirectX APIs function as a middle layer between the game software and the computer's hardware, allowing developers to use a basic set of commands to control countless possible system configurations. Windows device drivers for video cards, sound cards, and game controllers include DirectX support, which lets the API interact with and control the hardware. DirectX is not a single API — it is a family of APIs that each manage a single type of hardware. The Direct3D API, for example, controls a computer's GPU to render real-time 3D graphics. Direct2D renders 2D bitmap and vector graphics using a GPU's hardware graphics acceleration. The XAudio and XACT APIs control audio output. Games like Red Dead Redemption 2 use DirectX for 3D graphics Creating games using DirectX requires that the developer use the DirectX software development kit (SDK). Since both Windows computers and Xbox game consoles use DirectX, developers can use it to make games for both platforms at once. And, because DirectX is built into Windows and every device driver, the end user doesn't need to install anything except for their games. Updated March 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/directx Copy

Disk Drive

Disk Drive A disk drive is a device that reads and/or writes data to a disk. The most common type of disk drive is a hard drive (or "hard disk drive"), but several other types of disk drives exist as well. Some examples include removable storage devices, floppy drives, and optical drives, which read optical media, such as CDs and DVDs. While there are multiple types of disk drives, they all work in a similar fashion. Each drive operates by spinning a disk and reading data from it using a small component called a drive head. Hard drives and removable disk drives use a magnetic head, while optical drives use a laser. CD and DVD burners include a high-powered laser that can imprint data onto discs. Since hard drives are now available in such large capacities, there is little need for removable disk drives. Instead of expanding a system's storage capacity with removable media, most people now use external hard drives instead. While CD and DVD drives are still common, they are used less often since software, movies, and music can now often be downloaded from the Internet. Therefore, internal hard drives and external hard drives are the most common types of disk drives used today. Updated September 14, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/diskdrive Copy

Disk Image

Disk Image A disk image is a file that contains an exact copy of a disk's contents. It replicates all data from the source disk, including the folder structure and hidden metadata. Since a disk image contains a file system, an operating system must first mount one as an individual volume or virtual disk before accessing its data. Disk images are used to create exact backups of optical discs, hard drives, flash drives, and any other disk. For example, you can use a disk utility to create a disk image of your startup disk to back up your personal data. If the startup disk is damaged, you can recover the entire thing with a disk image. You can also create a disc image of a CD or DVD. NOTE: Disc images created from optical media are generally referred to as "disc images" rather than "disk images." Examples MacOS software installers are often distributed on disk image files since the format supports cryptographic signing, ensuring that the files on the disk image are the ones the developer put there. Virtual machines also use disk image files to mount drives used by the guest operating system. A macOS app distributed on a disk image Making a disk image of a computer's hard drive or SSD is a common way to create a backup, allowing you to restore the entire disk in the event of data corruption or other disasters. You can also use disk images to clone a computer for deployment. For example, if you're setting up a computer lab with dozens of identical systems that will occasionally need to be restored to a default configuration, a disk image streamlines the process. Digital forensics professionals also use disk images to examine the contents of a disk without changing or damaging the original version. View a list of disk image file formats at FileInfo.com. File extensions: .ISO, .BIN, .DMG Updated March 29, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/disk_image Copy

DisplayPort

DisplayPort DisplayPort is a digital video and audio interface used to connect computers and monitors, standardized by the Video Electronics Standards Association (VESA). The standard defines the DisplayPort connector and a smaller version called Mini DisplayPort. DisplayPort replaced the previous digital connection standard, DVI. DisplayPort is a fully-digital interface that sends video, audio, and data signals as data packets — making it more similar to Ethernet or USB than to previous display connections. Transferring everything over a single cable allows a computer to connect to a monitor with built-in speakers and USB ports. Updates to the DisplayPort standard have increased its capabilities over time; a single DisplayPort 2.0 connection can provide a compressed video signal to a 16K (15360 x 8640) display at 60 Hz or two 8K (7680 x 4320) displays at 120 Hz. DisplayPort is also part of the Thunderbolt interface standard. The first two generations of Thunderbolt integrated a DisplayPort signal with a PCIe interface and power connection through a Mini DisplayPort connector; 3rd- and 4th-generation Thunderbolt interfaces send those signals through a USB-C connector. Comparison to HDMI DisplayPort and HDMI connections have similar capabilities and are often found together on computer graphics cards. DisplayPort is designed only for computers, while HDMI is also meant for A/V equipment like televisions, audio receivers, and Blu-ray players. They both can send video and audio over a single cable, but DisplayPort also transmits USB signals. DisplayPort and HDMI have increased speeds and capabilities through several updates; any DisplayPort cable will support the most recent standard, while an updated HDMI device needs an HDMI cable of the same generation. An HDMI port (left) and a DisplayPort (right) DisplayPort supports higher resolutions and refresh rates than HDMI, but those differences only matter on high-end displays. Because HDMI is good enough in most cases, many monitors include only HDMI ports instead of DisplayPorts. Updated October 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/displayport Copy

Dithering

Dithering Dithering is a process that uses digital noise to smooth out colors in digital graphics and sounds in digital audio. Digital Graphics All digital photos are an approximation of the original subject, since computers cannot display an infinite number of colors. Instead, the colors are estimated, or rounded to the closest color available. For example, an 8-bit GIF image can only include 256 (2^8) colors. This may be enough for a logo or computer graphic, but is too few colors to accurately represent a digital photo. (This is why most digital photos are saved as 16 or 24-bit JPEG images, since they support thousands or millions of colors.) When digital photos contain only a few hundred colors, they typically look blotchy, since large areas are represented by single colors. Dithering can be used to reduce this blotchy appearance by adding digital noise to smooth out the transitions between colors. This "noise" makes the photo appear more grainy, but gives it a more accurate representation since the colors blend together more smoothly. In fact, if you view a dithered 256-color image from far away, it may look identical to the same image that is represented by thousands or millions of colors. Digital Audio Like digital images, digital audio recordings are approximations of the original analog source. Therefore, if the sampling rate or bit depth of an audio file is too low, it may sound choppy or rough. Dithering can be applied to the audio file to smooth out the roughness. Similar to dithering a digital image, audio dithering adds digital noise to the audio to smooth out the sound. If you view a dithered waveform in an audio editor, it will appear less blocky. More importantly, if you listen to a dithered audio track, it should sound smoother and more like the original analog sound. Summary Several types of dithering algorithms are used by various image and audio editors, though random dithering is the most common. While dithering is often used to improve the appearance and sound of low quality graphics and audio, it can also be applied to high quality images and recordings. In these situations, dithering may still provide extra smoothness to the image or sound. Updated June 10, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dithering Copy
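Random dithering, the simple approach mentioned in the summary, just adds a little noise before each value is quantized. The sketch below, written in Python with made-up gray values and an arbitrary noise level, shows how a smooth gradient collapses into flat bands without dithering but blends across the band boundary with it:

import random

def quantize(value, levels):
    # Round a 0-255 gray value to the nearest of a few available levels.
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

def dither_quantize(value, levels, noise=12):
    # Random dithering: add a little noise before quantizing.
    noisy = min(255, max(0, value + random.uniform(-noise, noise)))
    return quantize(noisy, levels)

gradient = list(range(118, 138))                   # a smooth gray gradient
print([quantize(v, 4) for v in gradient])          # abrupt jump between two flat bands
print([dither_quantize(v, 4) for v in gradient])   # values alternate, blending the bands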

DKIM

DKIM Stands for "DomainKeys Identified Mail." DKIM is an email authentication technology that verifies a message was sent from a legitimate user of an email address. It is designed to prevent email forgery or spoofing. DKIM works by attaching a digital signature to the header of an email message. The header is generated by the outgoing mail server and is unique to the domain hosted on the server. The receiving mail server can check the header against a public key stored in the sending server's DNS record to confirm the authenticity of the message. Many popular email services like Gmail, Yahoo! Mail, and Outlook use DKIM by default. Other email accounts, such as those set up on web servers, may require DKIM to be manually activated. For example, cPanel – a popular Linux web server application – allows an administrator to activate DKIM in the Email → Authentication section of the cPanel interface. Once DKIM is enabled, it is activated for all users automatically. While DKIM provides a simple way to verify a message has been sent from the corresponding domain, it is not a foolproof solution. For example, the receiving mail server must also support DKIM or the header information will be ignored. Additionally, messages with a valid signature can be forwarded or resent from another email address. It is also important to note that DKIM is designed to authenticate messages, not prevent spam. While a valid DKIM header may mean a message is less likely to be spam, it has no relation to the content of the message. History The DomainKeys Identified Mail specification was created in 2005 when Yahoo! and Cisco merged their respective DomainKeys and Identified Internet Mail into a single solution. It was published by the Internet Engineering Task Force (IETF) the same year and has been in use ever since. NOTE: DKIM is commonly used along with SPF (Sender Policy Framework), though the two verification methods are completely separate. Updated January 6, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dkim Copy

DLC

DLC Stands for "Downloadable Content." DLC refers to additional content that can be downloaded within a video game. It has become a common feature in PC, console, and mobile games. The most common type of downloadable content is extra maps or levels that extend the gameplay of the original game. For example, Activision provides Modern Warfare players with new downloadable levels every few months. The company also releases new songs for its Guitar Hero series on a regular basis. By downloading new levels or songs, players can continue to enjoy new challenges after completing the original game. Another popular type of DLC includes extra items that can be incorporated into the game. For instance, Capcom allows Street Fighter IV players to download custom outfits for their favorite characters. Microsoft provides additional vehicles that can be downloaded by Forza Motorsport 3 users. Epic Games provides Gears of War 3 players with new characters that can be added to the game. While some downloadable content is offered for free, most DLC must be purchased. The cost of downloadable content packs is typically much less than the price of the original game, though multiple DLC purchases may surpass the cost of the game itself. Therefore, DLC has become a common way for software developers to generate a continual long-term revenue stream from video games. NOTE: While DLC first became popular on gaming consoles, it soon progressed to PC games, and then to mobile devices. Now, many mobile apps offer "in-app purchases," which is synonymous with DLC. Updated September 6, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dlc Copy

DLL

DLL Stands for "Dynamic Link Library." A DLL is a compiled library of functions, data, and other resources that programs running in Windows may use. Any program can access the code in a DLL, and multiple programs may access a DLL at the same time. Accessing a shared code library means that programs don't need to include that code in their executable files, which helps them to use less storage space and system memory. When a program needs to access the functions and resources in a DLL file, it loads it into system memory at runtime and creates links to the resources it needs. This process is called dynamic linking, and it allows the program to use the functions, data, and resources in the DLL as if they were included in the program itself. However, a program that depends on dynamic linking won't run if any of its DLL files are missing. Windows includes thousands of DLL files containing system functions and resources Windows includes many DLL files that contain basic system resources like API functions, device drivers, and user interface elements. Any program running on Windows may access these DLL files to use those shared resources. Third-party programs may install additional DLL files, offloading functions and data to allow the program to run more efficiently. DLL files may also receive updates separately from the programs that call them to fix bugs without recompiling the entire application. NOTE: The opposite of dynamic linking is static linking, which links a program's libraries and dependencies to the executable file during compilation. The resulting executable file is larger but runs without any additional files. File Extensions: .DLL Updated May 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dll Copy
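Dynamic linking at runtime can be seen from a script as well as a compiled program. The following is a minimal, Windows-only sketch using Python's standard ctypes module; kernel32's GetTickCount and user32's MessageBoxW are just two common examples of functions exported by system DLLs:

import ctypes

# Load system DLLs at runtime (dynamic linking) and call functions from them.
kernel32 = ctypes.WinDLL("kernel32")     # C:\Windows\System32\kernel32.dll
user32 = ctypes.WinDLL("user32")

kernel32.GetTickCount.restype = ctypes.c_ulong
uptime_ms = kernel32.GetTickCount()      # milliseconds since the system started
print("System uptime:", uptime_ms // 1000, "seconds")

# Many programs can share the same DLL; here user32's MessageBoxW shows a dialog.
user32.MessageBoxW(None, "Hello from a DLL call!", "ctypes demo", 0)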

DMA

DMA Stands for "Direct Memory Access." DMA is a method of transferring data between a computer's system memory and another hardware component that does not require the CPU to process the data. It uses the CPU only to initiate a data transfer; after that, the other component can communicate with the system RAM directly without any further involvement from the CPU. DMA reduces the overhead involved in data transfers, freeing up the CPU to continue processing other operations instead. DMA is used by many types of hardware that have their own data processing capabilities. For example, a GPU rendering a 3D scene in a game uses DMA to load textures directly from system memory, leaving the CPU free to calculate physics and AI instead. Without DMA, the CPU needs to be involved in every read-and-write action, occupying processing resources that could be used for other tasks. Storage controllers, sound cards, and network interface cards also use DMA during data transfers. DMA transfers data directly between devices, while non-DMA transfers require CPU processing Early computers used dedicated DMA controller chips to manage data transfers. These controllers assigned a limited number of DMA addresses to devices capable of DMA transfers. In modern computers, each DMA-compatible device includes an integrated DMA engine responsible for coordinating with other devices and managing data transfers over the PCI Express bus. Updated October 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dma Copy

DMZ

DMZ Stands for "Demilitarized Zone." In computing, a DMZ is a section of a network that exists between the intranet and a public network, such as the Internet. It may contain a single host or multiple computer systems. The purpose of a DMZ is to protect an intranet from external access. By separating the intranet from hosts that can be accessed outside a local area network (LAN), internal systems are protected from unauthorized access outside the network. For example, a business may have an intranet composed of employee workstations. The company's public servers, such as the web server and mail server, could be placed in a DMZ so they are separate from the workstations. If the servers were compromised by an external attack, the internal systems would be unaffected. A DMZ can be configured several different ways, but two of the most common include single firewall and dual firewall architectures. In a single firewall setup, the intranet and DMZ are on separate networks, but share the same firewall, which monitors and filters traffic from the ISP. In a dual firewall setup, one firewall is placed between the intranet and the DMZ and another firewall is placed between the DMZ and the Internet connection. This setup is more secure since it provides two layers of defense against external attacks. NOTE: The term "DMZ" or "Demilitarized Zone" comes from a military term used to describe a neutral area where military operations are not allowed to take place. These areas typically exist along the border between two different countries. They serve as a buffer and are designed to prevent unnecessary escalations of military action. Similarly, a DMZ is a neutral area within a computer network that can be accessed by both internal and external computer systems. Updated September 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dmz Copy

DNS

DNS Stands for "Domain Name System." Domain names serve as memorizable names for websites and other services on the Internet. However, computers access Internet devices by their IP addresses. DNS translates domain names into IP addresses, allowing you to access an Internet location by its domain name. Thanks to DNS, you can visit a website by typing in the domain name rather than the IP address. For example, to visit the Tech Terms Computer Dictionary, you can simply type "techterms.com" in the address bar of your web browser rather than the IP address (67.43.14.98). It also simplifies email addresses, since DNS translates the domain name (following the "@" symbol) to the appropriate IP address. To understand how DNS works, you can think of it like the contacts app on your smartphone. When you call a friend, you simply select his or her name from a list. The phone does not actually call the person by name, it calls the person's phone number. DNS works the same way by associating a unique IP address with each domain name. Unlike your address book, the DNS translation table is not stored in a single location. Instead, the data is stored on millions of servers around the world. When a domain name is registered, it must be assigned at least two nameservers (which can be edited through the domain name registrar at any time). The nameserver addresses point to a server that has a directory of domain names and their associated IP addresses. When a computer accesses a website over the Internet, it locates the corresponding nameserver and gets the correct IP address for the website. Since DNS translation creates additional overhead when connecting to websites, ISPs cache DNS records and host the data locally. Once the IP address of a domain name is cached, an ISP can automatically direct subsequent requests to the appropriate IP address. This works great until an IP address changes, in which case the request may be sent to the wrong server or the server will not respond at all. Therefore, DNS caches are updated regularly, usually somewhere between a few hours and a few days. Updated August 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dns Copy
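You can see DNS translation in action from almost any programming language. Here is a minimal sketch in Python using the standard socket module; the addresses it prints will reflect whatever the domain's name servers return at the time you run it:

import socket

# Forward lookup: translate a domain name into an IP address
ip_address = socket.gethostbyname("techterms.com")
print("techterms.com resolves to", ip_address)

# A domain may map to more than one address; gethostbyname_ex returns them all
hostname, aliases, addresses = socket.gethostbyname_ex("techterms.com")
print(hostname, addresses)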

DNS Record

DNS Record A DNS record is a plain text entry in a zone file that contains important information about a domain, and is an important part of the Domain Name System. A domain's zone file contains multiple DNS records that help translate human-readable domain names into machine-readable IP addresses, including a domain's name server and mail server information. A DNS record can also include domain aliases for forwarding domains and subdomains. Each DNS record in a zone file contains a single piece of information; together, these records provide a full set of instructions for accessing that domain and its resources. There are multiple types of DNS records that each provide different information. Some of the most common types of DNS records are listed below. NS records list the domain's name servers. A records list the domain's IPv4 address. AAAA records list the domain's IPv6 address. SOA (Start of Authority) records include authoritative information for a domain, including its primary name server and timers that instruct how often to refresh its DNS information. MX records list mail servers that handle email sent to the domain. CNAME records create aliases (or canonical names) that forward requests to another domain or subdomain. DNS records in a zone file typically follow the same pattern, starting with the domain or hostname, followed by a record type, and then listing details for that record type. Several examples are listed below.
example.com   IN   NS   ns1.example.com
example.com   IN   A   192.168.0.1
example.com   IN   AAAA   2001:db8:85a3::8a2e:370:7334
example.com   IN   MX   mail.example.com
app.example.com   IN   CNAME   web.example.com
Updated July 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dns_record Copy

DNSSEC

DNSSEC Stands for "Domain Name System Security Extensions." It is an extension of the standard domain name system (DNS), which translates domain names to IP addresses. DNSSEC improves security by validating the authenticity of the DNS data. The original domain name system was developed in the 1980s with minimal security. For example, when a host requests an IP address from a name server using a standard DNS query, it assumes the name server is valid. However, a name server can pretend to be another server by spoofing (or faking) its IP address. A fake name server could potentially redirect domain names to the wrong websites. DNSSEC provides extra security by requiring authentication with a digital signature. Each query and response is "signed" using a public/private key pair. The private key is generated by the host and the public key is generated by a DNS zone, or group of trusted servers. These servers create a chain of trust, in which they validate each other's public keys. Each DNSSEC-enabled name server stores its public key in a hashed "DNSKEY" DNS record. Enabling DNSSEC While DNSSEC is not required for web servers or mail servers, many web hosts recommend it. To configure DNSSEC, you must use a nameserver that supports it, like PowerDNS or Knot DNS. Then you must enable DNSSEC on your server and configure it within the control panel interface. If you are using a public nameserver, activating DNSSEC may be as simple as clicking "Enable DNSSEC." If you are using a custom name server, you may need to manually create one or more delegation signer (DS) records. After you have enabled DNSSEC, it may take several hours to activate since the server must validate the DS records with other servers within the DNS zone. Updated June 27, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dnssec Copy

Dock

Dock The Dock is a feature of the Macintosh operating system that was introduced with Mac OS X. It is a virtual tray of icons that provides fast, one-click access to commonly used programs and files. By default, the Dock is displayed at the bottom of the Mac OS X desktop. It contains icons for several of the applications included with Mac OS X and always includes the Finder icon on the far left and the Trash icon on the far right. While the Dock has a default size and location, these options can be changed within the Dock System Preference pane. For example, the Dock can be moved to the left or right side of the screen. You can also change the size of the dock and the magnification percentage, which magnifies the icons as you roll over them with the cursor. If you want the Dock to only appear when you need it, you can select "Automatically hide and show the Dock," which will hide the Dock unless you move the mouse to the bottom of the screen. To open an application, file, or folder from the Dock, simply click the icon (you don't need to double-click items in the Dock). When you open an application, the icon will bounce while the program is opening. Once the program opens, the icon will have a dot underneath it, which indicates the application is running. You can also open files by dragging them to the appropriate application in the Dock. If the application is not already running, it will start up, then open the file. If you want to add items to the Dock, you can drag the corresponding icons to the Dock from open windows or the desktop. Note that applications are located on the left of the Dock and files and folders are located on the right side. Therefore, make sure you drag the icon to the correct side. When you move an icon to the Dock, a space will open for it and you can place it wherever you like. You can also move icons around by simply dragging them to different spots within the Dock. If you want to remove an icon from the Dock, simply drag it from the Dock to the desktop. You will see an animation involving a puff of smoke, which indicates the program has been removed. Since the icons in the Dock are only shortcuts to the original files and applications, the actual program or file will remain untouched, even after you remove the icon from the Dock. Updated December 16, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dock Copy

Docking Station

Docking Station A docking station, or dock, is a device that connects a laptop to multiple peripherals. It provides a single connection point that allows a laptop to use a connected monitor, printer, keyboard, and mouse. This allows a laptop to function like a desktop computer. Laptop manufacturers often build custom docking stations for their laptops. These docks usually have a proprietary input port that connects to a matching port on specific laptop models. Early docks, such as those built in the 1990s, included serial ports for connecting input devices, parallel ports for connecting printers and scanners, and VGA ports for connecting monitors. In recent years, laptop docking stations have become more standardized, with USB ports for connecting most peripherals and DVI ports for connecting displays. While modern docks provide standardized I/O ports, many docking stations still use a proprietary dock connector, which means when you buy a new laptop, you may need to buy a new dock. Fortunately, the Thunderbolt connector, first used in Apple's MacBook laptops, eliminates the need for a docking station. A single Thunderbolt connection can support USB, FireWire, Ethernet, and DisplayPort connections. Therefore, a Thunderbolt hub serves the same purpose as a laptop dock and is compatible with any computer that has a standard Thunderbolt connection. NOTE: Docking stations may also refer to hardware used to connect tablets, smartphones, and other portable devices to one or more peripherals. However, these devices are generally called "docks" and typically have fewer I/O connections than a laptop dock. Updated July 11, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/docking_station Copy

Document

Document A computer document is a file created by a software application. While the term "document" originally referred specifically to word processor documents, it is now used to refer to all types of saved files. Therefore, documents may contain text, images, audio, video, and other types of data. A document is represented with both an icon and a filename. The icon provides a visual representation of the file type, while the filename provides a unique name for the file. Most document filenames also include a file extension, which defines the file type of the document. For example, a Microsoft Word document may have a .DOCX file extension, while a Photoshop document may have a .PSD file extension. Many software applications allow you to create a new document by selecting File → New from the menu bar. You can then edit the document and select File → Save to save the file to your hard disk. If you want to create a copy of the document, most programs allow you to select File → Save As… to save the document with a different filename. Once you have saved a document, you can open it at a later time by double-clicking the file or by selecting File → Open… within the associated program. Updated November 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/document Copy

Document Grinding

Document Grinding Document grinding is the process of analyzing documents to extract meaningful data. The term is often associated with computer hacking, since hackers may "grind" documents to reveal confidential data. However, document grinding is also used for nonmalicious purposes. Examples include identifying unknown file types and viewing file metadata. It is possible to perform document grinding on both plain text and binary files. Text Files Grinding text files is a simple process since they store data as plain text. You can search for characters and strings within a text document using a tool like grep or another search utility. Since text processing is a relatively fast computer operation, it may be possible to grind several large documents in less than a second. Common text file types targeted for document grinding include log files (.LOG, .TXT) and configuration files (.CONF, .CNF). If a hacker gains access to a web server, for example, he may search these files for usernames, passwords, and other confidential data. Binary Files Binary files may contain some plain text, but they also store binary data — 1s and 0s. It is more difficult to grind binary data since it cannot be searched with a text search tool. Additionally, many binary files are saved in a proprietary file format, which is difficult to parse without the corresponding application. Therefore, binary document grinding typically focuses on the header and footer of a document, which may contain plain text. It may also aim to extract file metadata. Many binary files contain information about the file type in the header of the file. For example, the letters "PNG" in a file's header indicate the file is a PNG image. This information is useful for identifying the file type when the file does not have a file extension. Similarly, digital photos often contain hidden EXIF data saved when the photo was taken. An image-viewing program or a document grinding script may be able to detect and extract this information. Updated September 13, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/document_grinding Copy
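Checking a file's header against known "magic number" signatures is a simple, nonmalicious example of document grinding. Below is a minimal sketch in Python; the file name is hypothetical and the signature list covers only a handful of common formats:

# Identify a file's type from its header bytes rather than its file extension.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive (also DOCX and XLSX)",
}

def identify(path):
    with open(path, "rb") as f:
        header = f.read(8)
    for signature, file_type in SIGNATURES.items():
        if header.startswith(signature):
            return file_type
    return "unknown"

print(identify("mystery_file"))   # a file with no extension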

DOM

DOM Stands for "Document Object Model." The DOM is a programming interface for accessing and modifying the content of webpages and XML documents. It defines the logical structure of a document, treating it like a tree where each branching node represents individual elements. Programs and scripts can access the DOM to read and modify the document's structure, content, and style. The DOM organizes the structure of HTML and XML files in a hierarchy of objects called "nodes." At the root is the Document node, which represents the document as a whole. Next are the Element nodes that represent individual HTML or XML tags. Element nodes are themselves hierarchical in the way that they're nested. For example, the <html> node is closest to the root of the document and contains both the <head> and <body> nodes, and each of those contains child nodes like <title> and <img> elements. Each element node in the DOM includes several Attribute nodes that modify its properties. For example, an <img> element node could include attribute nodes for src, alt, width, and height that would allow some JavaScript to modify any of those attributes. Finally, Text nodes contain the text within each element, making up the document contents. Text nodes are the leaves of the tree and have no further levels after them. The DOM hierarchy showing several elements and how they relate to each other Using JavaScript (or other scripting languages) to access a page's DOM allows you to create dynamic and interactive webpages. For example, event handlers attached to nodes can wait for user interactions like mouse clicks or keyboard input, then modify the page contents in response. JavaScript events can use the DOM to remove existing nodes and add new ones. Scripts can fetch data from a database on a server and use the DOM to insert that data into specific text elements. Scripts can also use the DOM to modify the CSS properties of elements and change the document's appearance and layout. Updated June 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dom Copy
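Browsers expose the DOM to JavaScript, but other languages implement the same interfaces. The sketch below uses Python's standard xml.dom.minidom module on a tiny, made-up document to show the Document, Element, Attribute, and Text nodes described above:

from xml.dom.minidom import parseString

document = parseString(
    "<html><head><title>Example</title></head>"
    "<body><p>Hello</p><img src='photo.png' alt='A photo'/></body></html>"
)

root = document.documentElement                      # the <html> element node
print(root.tagName)                                  # "html"

paragraph = document.getElementsByTagName("p")[0]
print(paragraph.firstChild.data)                     # the text node inside <p>

image = document.getElementsByTagName("img")[0]
image.setAttribute("width", "300")                   # modify an attribute node
print(image.toxml())                                 # prints the modified <img> element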

Domain

Domain A domain is a group of computers on a network that can be centrally administered with a shared set of rules. Domains may be part of a local area network (LAN), a wide area network (WAN), or a virtual private network (VPN). Servers, known as domain controllers, manage users and user groups for every computer connected to their domains and are often used to manage large enterprise networks. Domains can help map a company's organizational structure to its network by creating domains for individual departments, divisions, or regions. Users in a domain may log in to any workstation in their domain. Administrators can assign users to user groups and choose what resources those users and groups may access. For example, you can configure a domain so that one set of computers, printers, phones, and shared folders is available to all users, while another set of resources is limited to a single user group. Domain controllers also enforce security policies and firewalls across a network and can restrict incoming or outgoing network traffic. Many domain controllers also act as DNS servers for their domains. They can assign hostnames to individual computers for identification instead of (or in addition to) IP addresses. Domains and network workgroups are similar, as both provide access to computers and resources within the group. However, workgroups are decentralized and do not rely on a server or domain controller. Domain controller software can run on several different platforms. Windows Server Active Directory is one of the most popular domain controllers, included as a component in the Windows Server operating system. Unix and Linux-based servers can use Samba Domain Controller, which is available as part of the Samba software suite. Updated May 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/domain Copy

Domain Name

Domain Name A domain name is a unique name that identifies a website. For example, the domain name of the Tech Terms Computer Dictionary is "techterms.com." Each website has a domain name that serves as an address, which is used to access the website. Whenever you visit a website, the domain name appears in the address bar of the web browser. Some domain names are preceded by "www" (which is not part of the domain name), while others omit the "www" prefix. All domain names have a domain suffix, such as .com, .net, or .org. The domain suffix helps identify the type of website the domain name represents. For example, ".com" domain names are typically used by commercial websites, while ".org" websites are often used by non-profit organizations. Some domain names end with a country code, such as ".dk" (Denmark) or ".se" (Sweden), which helps identify the location and audience of the website. Domain names are relatively cheap to register, though they must be renewed every year or every few years. The good news is that anyone can register a domain name, so you can purchase a unique domain name for your blog or website. The bad news is that nearly all domain names with common words have already been registered. Therefore, if you want to register a custom domain name, you may need to think of a creative variation. Once you decide on a domain name and register it, the name is yours until you stop renewing it. When the renewal period expires, the domain name becomes available for others to purchase. NOTE: When you access a website, the domain name is actually translated to an IP address, which defines the server where the website is located. This translation is performed dynamically by a service called DNS. Updated September 14, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/domain_name Copy
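As a rough illustration of the DNS translation mentioned in the note above, the short Python sketch below resolves a domain name to an IP address using the standard library; "example.com" is just a placeholder name.

    # Minimal sketch: translate a domain name into an IP address via DNS.
    import socket

    domain = "example.com"                      # placeholder domain name
    ip_address = socket.gethostbyname(domain)   # performs the DNS lookup
    print(f"{domain} resolves to {ip_address}")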

Domain Suffix

Domain Suffix A domain suffix is the last part of a domain name, consisting of the final "." and two or more letters. A domain suffix is also known as a "top-level domain" or TLD. Popular domain suffixes include ".com," ".net," and ".org," but there are more than a thousand domain suffixes approved by ICANN. Each domain suffix is intended to define the type of website represented by the domain name. For example, ".com" domains are meant for commercial websites, while non-profit organizations use ".org" domains. Each country also has a unique domain suffix meant for websites within the country. For example, Brazilian websites may use the ".br" domain suffix, Chinese websites may use the ".cn" suffix, and Australian websites may use the ".au" suffix. A country code suffix does not always mean that a website is hosted in that country or primarily meant for that country's people. Some countries allow outside people and businesses to purchase domains using their country codes, with some TLDs popular in certain industries or communities. For example, Anguilla's ".ai" TLD is commonly used by technology companies specializing in artificial intelligence; ".me" (Montenegro), ".tv" (Tuvalu), and ".fm" (the Federated States of Micronesia) are other examples that see significant use outside of their countries. As the number of websites on the Internet continuously grows, the initial set of TLDs limits the namespace available for new domains. With most good ".com" domain names already taken in the late 1990s, people began pressuring ICANN to add new generic TLDs for businesses, personal websites, and other groups. They first added a new batch of generic TLDs in 2000 — including ".biz" and ".info." In 2013 ICANN began approving requests for many additional suffixes like ".art," ".dev," ".store," and ".app" that allow a website owner to choose a TLD for nearly any website niche. Updated January 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/domain_suffix Copy
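As a simple illustration, the Python sketch below pulls the domain suffix off a hostname by splitting on the final dot; note that this naive approach ignores multi-part suffixes such as ".co.uk," and the sample names are placeholders.

    # Minimal sketch: find the domain suffix (TLD) of a hostname.
    def domain_suffix(hostname):
        return "." + hostname.rsplit(".", 1)[-1]

    for name in ["techterms.com", "example.org", "example.com.br"]:
        print(name, "->", domain_suffix(name))   # .com, .org, .br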

Donationware

Donationware Donationware is software that is free to use, but encourages users to make a donation to the developer. Some donationware programs request a specific amount, while others allow users to determine what the program is worth and send in an appropriate donation. Unlike shareware, which may provide limited functionality until a registration key is purchased, donationware is fully functional. Therefore, donationware is more similar to freeware, which is free to use, but retains the author's copyright. Some donationware programs make subtle requests for donations, such as an option in the menu bar. Others are more blatant, and may prompt you for a donation each time you open the program. This dialog box should disappear once you have made a contribution. Most donationware programs allow you to donate to the developer via a credit card or PayPal account. If you find a certain donationware program useful, you can be sure the developer will appreciate your donation. Just make sure that the website you use to donate is legitimate and provides a secure (HTTPS) connection. Updated May 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/donationware Copy

Dongle

Dongle A dongle is a small device, typically about the size of a flash drive, that plugs into a computer. Some dongles act as security keys while others serve as adapters. While early dongles connected to parallel ports on PCs and ADB ports on Macs, modern versions typically connect to a USB port. Security Keys Security dongles are used for copy protection and are designed to prevent software piracy. For example, some high-end software applications, such as professional audio and video production programs, require a dongle in order to run. The dongle, which is included with the software, must be plugged in when you open the software program. If the correct dongle is not detected, the application will produce an error message saying a dongle is required in order to use the software. Adapters Certain types of adapters are also called dongles. For instance, a dongle may provide a laptop with different types of wired connections. Previous generations of laptops had expansion slots called PCMCIA ports that were too thin to include an Ethernet jack. Therefore, a dongle was required. These types of dongles were typically one to three-inch cables that connected to the card on one end and had an Ethernet jack on the other. Modern Ethernet dongles have a similar appearance, but they usually connect to a USB or Thunderbolt port. Today, many dongles provide wireless capabilities. For example, USB Wi-Fi adapters are often called dongles. Since most computers now have built-in Wi-Fi chips, cellular data adapters, such as 3G and 4G dongles, are now more prevalent. These types of dongles allow you to connect to the Internet via a cellular carrier like Verizon or AT&T even when Wi-Fi is not available. Updated September 13, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dongle Copy

DOS

DOS Stands for "Disk Operating System." DOS was an operating system used by the first IBM-compatible computers. It was a single-user, text-based operating system with a command-line interface. It was developed as a partnership between IBM and Microsoft, which resulted in two versions — Microsoft's MS-DOS and IBM's PC DOS — that were essentially the same operating system with two different brandings. It was included with the original IBM Personal Computer in 1981 and was the dominant PC operating system through the mid-1990s. Like the Unix operating system, DOS used a command line for its interface that required users to type each command they wanted to run. These commands were simple, like dir (list all files in a directory) and cd (change directory), but the user needed to know the proper commands for everything they wanted to do. Later versions of DOS included a basic file manager, known as the DOS Shell, that helped users manage files using a directory tree. The DOS command prompt DOS used the FAT file system and referenced its available disk drives using drive letters. It reserved the letters A and B for floppy disk drives and assigned a computer's primary hard disk drive the letter C. It used the rest of the alphabet for additional disk drives, optical drives, and RAM disks. Microsoft developed Windows as a graphical user interface shell that ran on top of DOS. Windows 1.0 through 3.11 required a user to first boot their computer into DOS, then launch Windows from the command line. Windows 95 and 98 allowed the user to boot directly into Windows and bypass DOS, although it still ran in the background. Users could also access the DOS command prompt in Windows through a dedicated MS-DOS Prompt program. NOTE: Later versions of Windows, based on the NT kernel, no longer run DOS in the background. However, they still use many conventions inherited from DOS. For example, disk drives still use letters for identification, and the command line (CMD.EXE) uses most of the same syntax as DOS. Updated March 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dos Copy

Dot Matrix

Dot Matrix A dot matrix is a 2D matrix of dots that can represent images, symbols, or characters. They are used for electronic displays, such as computer monitors and LED screens, as well as printed output. In a dot matrix display, the images are approximated using a discrete set of dots instead of lines and shapes. Therefore, the more dots that are used, the clearer and more accurate the image representation will be. For example, a 16x16 dot matrix can represent the letter "S" more accurately than an 8x8 matrix. If enough dots are used, the image will appear continuous rather than as a group of dots. This is because the human eye blends the dots together to create a coherent image. For example, newspaper print is made up of dot matrixes, but it is hard to notice unless you look very closely at the paper. Bitmap images on a computer screen are also dot matrixes, since they are made up of a rectangular grid of pixels. If you look closely enough at your monitor, you may even be able to see the dots that make up the image. But be nice to your eyes and don't stare too long! While "dot matrix" has a broad definition, it can also be used to describe a specific type of printer. Dot matrix printers, or "impact printers," were introduced in the 1970s. These printers typically use paper with small holes along each side, which feed the paper through the printer. They are called dot matrix printers because they use a matrix of dots to print each character. While they do not have a very high resolution, dot matrix printers are an effective way of printing basic text documents. Therefore, while most businesses now use inkjet or laser printers, some organizations still find dot matrix printers to be an efficient printing solution. Updated October 3, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dotmatrix Copy
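To show how a character can be approximated by a matrix of dots, the Python sketch below renders a letter from a hand-made 8x8 grid of 0s and 1s; the pattern is invented for illustration and does not come from any real font.

    # Minimal sketch: an 8x8 dot matrix approximation of the letter "S".
    letter_s = [
        "01111110",
        "10000000",
        "10000000",
        "01111100",
        "00000010",
        "00000010",
        "10000010",
        "01111100",
    ]

    for row in letter_s:
        # Print a filled dot for each 1 and a space for each 0.
        print("".join("#" if dot == "1" else " " for dot in row))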

Dot Pitch

Dot Pitch Dot pitch, or "pixel pitch," is a measurement that defines the sharpness of a display. It measures the distance between the dots used to display the image on the screen. This distance is very small and is typically measured in fractions of millimeters. The smaller the dot pitch, the sharper the picture. Dot pitch applies to both CRT monitors and flat-screen displays. While some large-screen CRTs have dot-pitches as high as 0.51 mm, most computer displays have a dot pitch between 0.25 and 0.28 mm. Similarly, most LCD displays have a dot pitch between 0.20 and 0.28 mm. Some high-end displays used for scientific or medical imagery have dot pitches as low as 0.15 mm. These displays usually cost several times as much as consumer displays of the same size. While the terms "dot pitch" and "resolution" are related, they have different meanings. A display's resolution refers to how many pixels can be displayed on the screen. For example, a 20" monitor may have a maximum resolution of 1680 x 1050 pixels. When a display is set to its native resolution (typically the maximum resolution), it may display exactly one pixel per dot. However, if the resolution is reduced, the pixels will be larger than the dots used to display the image on the screen. In this case, each pixel is mapped onto multiple dots. Updated December 5, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dot_pitch Copy
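As a rough worked example (assuming a hypothetical 20-inch display running at 1680 x 1050, the same figures used in the definition above), the Python sketch below estimates the pixel pitch from the diagonal size and resolution; the actual dot pitch depends on the physical panel, so treat this only as an approximation.

    # Minimal sketch: estimate pixel pitch from diagonal size and resolution.
    import math

    diagonal_inches = 20                      # hypothetical 20" display
    width_px, height_px = 1680, 1050          # native resolution

    diagonal_px = math.hypot(width_px, height_px)   # pixels along the diagonal
    pixels_per_inch = diagonal_px / diagonal_inches
    pitch_mm = 25.4 / pixels_per_inch               # 25.4 mm per inch

    print(f"~{pixels_per_inch:.0f} PPI, pixel pitch ~{pitch_mm:.3f} mm")   # ~0.256 mm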

Double Click

Double Click Double clicking involves clicking your mouse button quickly two times. To perform a double click, and not just two clicks, the mouse button must be pressed twice within a very short time, typically about half a second. Most operating systems allow you to lengthen or shorten the maximum time allowed for a double click, using the Mouse Control Panel or System Preferences. A double click is recognized by your computer as a specific command, just like pressing a key on your keyboard. Double clicking is used to perform a variety of actions, such as opening a program, opening a folder, or selecting a word of text. In order to double click an object, just move the cursor over the item and press the left mouse button quickly two times. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/doubleclick Copy
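As a rough sketch of the timing logic described above (the half-second threshold and the function name are illustrative assumptions, not how any particular operating system implements it), a program can treat two clicks as a double click when they arrive within the allowed interval:

    # Minimal sketch: decide whether two clicks count as a double click.
    import time

    DOUBLE_CLICK_INTERVAL = 0.5   # seconds; adjustable, like the OS mouse setting
    last_click_time = None

    def register_click():
        # Return True if this click completes a double click.
        global last_click_time
        now = time.monotonic()
        is_double = (last_click_time is not None and
                     now - last_click_time <= DOUBLE_CLICK_INTERVAL)
        last_click_time = None if is_double else now
        return is_double

    print(register_click())   # False - first click
    print(register_click())   # True if called within 0.5 s of the first click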

Download

Download Download can be used as either a verb or a noun. As a verb, it refers to the process of receiving data over the Internet. Downloading is the opposite of uploading, or sending data to another system over the Internet. As a noun, download may refer to either a file that is retrieved from the Internet or the process of downloading a file. Every time you use the Internet, you download data. For example, each time you visit a webpage, your computer or mobile device must download the HTML, CSS, images, and any other relevant data in order to display the page in your web browser. When you click a "Download Now" link, your browser will start downloading a specific file that you can open. You can also download data using methods besides the web. For example, you can download files using an FTP program, download email messages with an email client, and download software updates directly through your operating system. You can manually initiate a download (such as clicking a download link), though most downloads happen automatically. For example, your smartphone may download email messages and software updates in the background without you knowing it. While you can download a file, the word "download" may also refer to the file itself. A common way you might see "download" used as a noun is in an online advertisement that says, "Free Download." This phrase implies that clicking the download link will download a file (often a software program or installer) that you can use for free. The noun "download" can also be used like the word "transfer" to describe the process of downloading data. For example, a program may display a status update that says, "Download in progress" or "Download complete." Updated September 18, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/download Copy
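As a minimal sketch of initiating a download from a script (the URL and filename below are placeholders), Python's standard urllib module can fetch a file over HTTP and save it locally:

    # Minimal sketch: download a file from a URL and save it to disk.
    import urllib.request

    url = "https://example.com/files/sample.zip"   # placeholder URL
    local_path = "sample.zip"

    urllib.request.urlretrieve(url, local_path)    # performs the download
    print(f"Download complete: {local_path}")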

Downtime

Downtime Downtime is a period when a system is not available. It may apply to any computer or network, but is most commonly used in reference to servers. In particular, web server reliability is often measured in terms of downtime, where little to no downtime is ideal. There are several reasons why a server may experience downtime: Server reboot - Restarting a server may require a few minutes of downtime because the system must shut down, reboot, then restart the necessary processes in order to respond to incoming requests. Software restart - Restarting a process, such as Apache on a web server, may cause a few seconds of downtime while the process is restarting. Network disconnect - If a server is physically disconnected from a network, it will not be reachable by the systems on the network. Network outage - If any part of a network (including the Internet) is not functioning between the server and client, the client will not be able to communicate with the server. Traffic overload - If a server receives more traffic than it can handle, it will not be able to respond to all the requests. Users may experience downtime until the traffic decreases. This may be caused by a spike in traffic or a DDoS attack. Hardware failure - If an important hardware component, such as an HDD or SSD, fails, it may cause the server to stop functioning. Software failure - If a process on a server, such as the httpd (HTTP) service, stops running, it will cause the server to be unresponsive to requests until the process is restarted. Power outage - If the electrical power goes out and no backup power is available (for example, a generator or UPS), any affected systems will be offline until power is restored. Hacker attack - If a hacker gains control of a server, he or she may prevent access to the required services, causing the server to stop responding. In order to minimize downtime, server administrators must implement strong security measures and redundancy. Network security helps protect against malicious activity, such as unauthorized logins and DDoS attacks. Redundancy, such as RAID storage systems and backup power generators, helps prevent downtime due to hardware failure. In some cases, multiple servers may be configured so that a secondary server can take over if the primary server fails. While server admins try to minimize downtime as much as possible, sometimes downtime is unavoidable. For example, when performing a server migration, several minutes or even a few hours of downtime may be necessary. This type of "planned downtime" is typically scheduled for early morning or weekend hours when traffic levels are lowest. Updated December 27, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/downtime Copy
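As a very simple illustration of how downtime might be detected (the URL, interval, and timeout below are arbitrary assumptions, not a production monitoring setup), a script can poll a server and log whenever it stops responding:

    # Minimal sketch: poll a web server and report when it appears to be down.
    import time
    import urllib.error
    import urllib.request

    URL = "https://example.com/"     # placeholder server to monitor
    CHECK_INTERVAL = 60              # seconds between checks

    def is_up(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return 200 <= response.status < 400
        except (urllib.error.URLError, OSError):
            return False

    while True:
        status = "UP" if is_up(URL) else "DOWN"
        print(time.strftime("%Y-%m-%d %H:%M:%S"), status)
        time.sleep(CHECK_INTERVAL)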

DPI

DPI Stands for "Dots Per Inch." DPI is used to measure the resolution of an image both on screen and in print. As the name suggests, the DPI measures how many dots fit into a linear inch. Therefore, the higher the DPI, the more detail can be shown in an image. It should be noted that DPI is not dots per square inch. Since a 600 dpi printer can print 600 dots both horizontally and vertically per inch, it actually prints 360,000 (600 x 600) dots per square inch. Also, since most monitors have a native resolution of 72 or 96 pixels per inch, they cannot display a 300 dpi image in actual size. Instead, when viewed at 100%, the image will look much larger than the print version because the pixels on the screen take up more space than the dots on the paper. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dpi Copy
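The arithmetic above can be written out directly in a few lines of Python; the 600 dpi printer and 96 ppi screen are the same example figures used in the definition.

    # Minimal sketch: dots per square inch, and on-screen size of a print image.
    printer_dpi = 600
    print(printer_dpi * printer_dpi)   # 360000 dots per square inch

    image_dpi = 300                    # a 300 dpi image...
    screen_ppi = 96                    # ...viewed at 100% on a 96 ppi monitor
    scale = image_dpi / screen_ppi
    print(f"Appears about {scale:.2f}x larger on screen than in print")   # ~3.12x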

Drag

Drag You can use your mouse to drag icons and other objects on your computer screen. Dragging icons from your desktop or an open window to another folder will move the objects to the new folder. You can also drag icons to the Trash (Mac) or the Recycle Bin (Windows) if you want to delete them. Some word processing programs allow you to select text and drag the selected text to another place in the document. To select the text, you may have to "drag" the mouse over the text you want to select. Dragging is an important technique for using today's graphical user interfaces (GUIs). In fact, there are many other things you can drag besides icons. For example, you can drag the top of windows to reposition them, you can drag the scroll bar in open documents or Web pages to scroll through them, and you can drag messages to different folders in your mail program. Other programs, such as video games and image-editing programs, use dragging to reposition items on the screen. To drag an item, first move the cursor over the item you want to drag. Then click and hold down the left mouse button to "grab" the item. Move the mouse to position the item where you want it. Let go of the mouse button once you have moved the item to "release" it. This technique is known as a "drag and drop." Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/drag Copy

Drag and Drop

Drag and Drop Drag and drop (also "drag-and-drop") is a common action performed within a graphical user interface. It involves moving the cursor over an object, selecting it, and moving it to a new location. If you are using a mouse, you can drag and drop an object by clicking the mouse button to select an object, then moving the mouse while keeping the mouse button pushed down. This is called "dragging" the object. Once you have moved the object where you want to place it, you can lift up the mouse button to "drop" the object in the new location. If you are using a touchscreen device, you can select an item by simply touching it with your finger. (Some interfaces may require you to hold your finger on the object for a second or two to select it.) Then you drag the item by moving your finger across the screen to the location where you want to place it. To drop the object, simply lift your finger off the screen. Drag and drop can be used for multiple purposes. For example, you can drag and drop an icon on the desktop to move it to a folder. You can drag and drop an open window by clicking the title bar and moving it to a new location. Some programs allow you to open files by dragging and dropping file icons directly onto the application icon. Many programs allow you to customize the workspace by dragging and dropping interface elements in different locations on the screen. Computer games, like chess, typically allow you to move objects in the game using a drag and drop action. NOTE: Since drag and drop is a simple and intuitive way to work with objects, software programs often promote "drag and drop editing" capability, which implies the software is easy to use. Updated September 13, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/drag_and_drop Copy

DRAM

DRAM Stands for "Dynamic Random Access Memory." DRAM is a type of volatile memory used as the main system RAM in most computers, smartphones, and tablets. It is much less expensive to manufacture than Static RAM (SRAM), and can store data more densely by using memory cells containing tiny electrical capacitors. However, since these capacitors leak their electrical charge over time, a DRAM chip must constantly refresh its contents to maintain each capacitor's charge. Each memory cell in a DRAM chip consists of a capacitor that stores an electrical charge and a transistor that controls the power flow into the capacitor. Digital data is binary and consists of groups of bits with two possible values (0 and 1). If a memory cell's capacitor carries an electric charge, its bit has a value of 1; if it does not, its bit has a value of 0. To maintain each cell's contents, a memory refresh circuit reads the electrical charge from each cell, amplifies it, and then writes it back to the cell in a process that repeats many times each second. A single DRAM memory cell can hold its charge for up to 64 milliseconds, and since each DRAM chip contains billions of cells, it is constantly refreshing large groups of cells at a time. A Kingston RAM module using DDR4 SDRAM, a form of DRAM memory Like other computer components, DRAM technology is constantly seeing improvements. Early asynchronous DRAM was phased out in favor of faster, more efficient synchronous DRAM (SDRAM). Multiple generations of DDR SDRAM have improved data transfer rates even further. DRAM is used as the primary system RAM in computers and as video RAM on graphics cards, but is also found in other types of electronic devices like networking equipment, digital cameras, printers, and gaming consoles. Updated November 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dram Copy

Drive

Drive A drive is a computer component used to store data. It may be a static storage device or may use removable media. All drives store nonvolatile data, meaning the data is not erased when the power is turned off. Over the past several decades, drives have evolved along with other computer technologies. Below is a list of different types of computer drives. 5.25 inch floppy drive - uses flexible removable media, stores up to 800 KB per floppy disk, popular in the 1980s 3.5 inch floppy drive - uses more rigid removable media, stores up to 1.44 MB per disk, popular in the 1990s Optical drive - uses removable optical media such as CDs (800 MB), DVDs (4.7 - 17 GB), and Blu-ray discs (25-50 GB), available in both read-only and writable models, popular in the 2000s Flash drive - a small, highly portable storage device that uses flash memory and connects directly to a USB port HDD (hard disk drive) - the most common internal storage device used by computers over the past several decades, can store several terabytes (TB) of data SSD (solid state drive) - serves the same purpose as a hard drive but contains no moving parts; uses flash memory and provides faster performance than a hard drive While there are many different types of drives, they are all considered secondary memory since they are not accessed directly by the CPU. Instead, when a computer reads data from a drive, the data first gets sent to the RAM so that it can be accessed more quickly. Even the fastest drives, like SSDs, have much slower read/write speeds than RAM. Why are computer drives called "drives?" While an official answer remains elusive, a compelling reason is that early drives required a rotating device that would spin or "drive" the disk. While some modern drives have no moving parts, the legacy term "drive" seems to have stuck. Updated September 16, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/drive Copy

Drive-By Download

Drive-By Download A drive-by download is a download that happens automatically when you visit a webpage. The download starts without you initiating it and may take place in the background without any notification. Drive-by downloads can occur on both legitimate and malicious websites. For example, if a hacker gains access to a trusted website, he can install code on webpages that will initiate automatic downloads on visitors' computers. Malicious websites, such as those used in phishing and pharming activities, may intentionally download malware onto users' computers. There are multiple ways a webmaster can implement drive-by downloads in a webpage. One method is to insert JavaScript code that automatically opens a downloadable file once the page has loaded. Another method involves using an iframe that references another URL, which initiates the download. A less common method is to use a browser plug-in or extension that downloads files automatically. In rare cases, online advertisers can even insert code in display ads that initiate downloads on users' computers. Most ad networks now prevent this type of behavior. While drive-by downloads happen automatically, it is rare that an executable file will run without your permission. This is because most browsers notify you when a file has been downloaded and will not open downloaded files automatically. Therefore, you can prevent damage from drive-by downloads by simply not opening unknown files downloaded by your web browser. Updated August 10, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/drive-by_download Copy

Driver

Driver A driver, or device driver, is a software program that enables a specific hardware device to work with a computer's operating system. Drivers may be required for internal components, such as video cards and optical media drives, as well as external peripherals, such as printers and monitors. Most modern hardware is "plug and play," meaning the devices will work without requiring driver installation. However, even if a hardware device is recognized by the operating system, installing the correct drivers may provide additional options and functionality for the device. For example, most mice work automatically when they are connected to a PC. However, installing the appropriate mouse driver may allow you to customize the function of each button and adjust the mouse sensitivity. Some keyboard drivers allow you to assign functions to specific keys, such as controlling the volume or opening specific applications. There are several different ways to install drivers. For some devices, such as printers, the operating system may automatically find and install the correct drivers when the device is connected. In other cases, drivers can be installed from a CD or DVD provided with the hardware. If no disc is available, you can typically download the latest drivers from the manufacturer's website from either the "Support" or "Downloads" section. Most downloadable drivers include an installer that automatically installs the necessary files when you open the downloaded file. File extensions: .DRV, .DLL, .SYS, .KEXT Updated April 4, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/driver Copy

DRM

DRM Stands for "Digital Rights Management." DRM is a collection of systems used to protect copyrights on digital media. DRM tools limit the playback and use of protected media — music, video, and eBooks — to only people and devices with a valid license. These tools may also limit unauthorized copying and distribution of copyrighted digital media. DRM is used to protect both downloaded media files and streaming content. Most forms of DRM lock media files by encrypting files and providing decryption keys to licensed users. Some types of DRM require an active Internet connection to check for a license, but others only check for a license periodically or authenticate offline. Movies offered for rental instead of purchase may be time-restricted and expire after a certain amount of time. Other media may only be available to stream in a specific country or region, determined by a device's IP address or GPS location. Digital Rights Management is important to content creators, publishers, and distributors to make sure that they receive the appropriate revenue for the content. Some forms of DRM have been controversial due to buggy implementation or overly restrictive rules placed on the customer. However, when DRM works without being overly burdensome, it can reduce piracy by lowering the demand for pirated copies. For example, the iTunes Music Store used a form of DRM when it launched that still allowed people to easily buy music and listen to it on all their devices, making legal purchases more convenient than downloading music illegally with P2P software. NOTE: Commercial software can also use DRM to ensure that only licensed users can run it, typically by requiring an activation key or linking it to an online user account. Updated March 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/drm Copy
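As a loose illustration of the "encrypt the media, hand keys only to licensed users" idea described above (a toy sketch built on the third-party cryptography package, not how any real DRM system is implemented), content plays back only when a valid key is presented:

    # Toy sketch: content is encrypted, and only a valid license key unlocks it.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    license_key = Fernet.generate_key()          # key issued to a licensed user
    protected_media = Fernet(license_key).encrypt(b"...audio or video data...")

    def play(media, key):
        try:
            content = Fernet(key).decrypt(media)
            print("Playing:", content)
        except Exception:
            print("No valid license - playback blocked")

    play(protected_media, license_key)            # licensed device: plays
    play(protected_media, Fernet.generate_key())  # unlicensed device: blocked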

Drop Down Menu

Drop Down Menu A drop down menu is a horizontal list of options that each contain a vertical menu. When you roll over or click one of the primary options in a drop down menu, a list of choices will "drop down" below the main menu. The most common type of drop down menu is a menu bar. On Windows systems, the menu bar is typically located at the top of each open window. On Macintosh systems, it is fixed at the top of the screen. When you click one of the options in the menu bar, such as "File", a list of options will appear below the menu, such as New, Open, Close, and Save. You can click any of these options to select it. Drop down menus are also commonly used for website navigation. Many websites use drop down menus to provide users with direct links to more pages than standard navigation bars allow. For example, a news website may list several news categories in the main menu, such as Politics, Business, Sports, and Entertainment. Each of these menu options may include several subcategories. For example, the Sports menu may contain options such as Baseball, Basketball, Football, Hockey, and Soccer. By selecting a specific sport within the Sports menu, you can navigate to the section you want with one click, instead of having to visit the main Sports page first. Website drop down menus are typically created using DHTML (dynamic HTML), which may include a combination of HTML, CSS, and JavaScript code. They can also be written as Flash applications. While software program menus typically require you to click on the main menu to reveal the drop down options, website drop down menus often appear when you simply move the cursor over the main menu. Clicking a main menu option at the top of a website drop down menu may open the main topic page (like the Sports page in the example above). Regardless of how intuitive you find drop down menus to be, they can be a useful navigation tool once you get used to them. Updated February 15, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dropdownmenu Copy

DSL

DSL Stands for "Digital Subscriber Line." DSL is a communications medium used to transfer digital signals over standard telephone lines. Along with cable Internet, DSL is one of the most popular ways ISPs provide broadband Internet access. When you make a telephone call using a landline, the voice signal is transmitted using low frequencies from 0 Hz to 4 kHz. This range, called the "voiceband," only uses a small part of the frequency range supported by copper phone lines. Therefore, DSL makes use of the higher frequencies to transmit digital signals, in the range of 25 kHz to 1.5 MHz. While these frequencies are higher than the highest audible frequency (20 kHz), they can still cause interference during phone conversations. Therefore, DSL filters or splitters are used to make sure the high frequencies do not interfere with phone calls. Symmetric DSL (SDSL) splits the upstream and downstream frequencies evenly, providing equal speeds for both sending and receiving data. However, since most users download more data than they upload, ISPs typically offer asymmetric DSL (ADSL) service. ADSL provides a wider frequency range for downstream transfers, which offers several times faster downstream speeds. For example, an SDSL connection may provide 2 Mbps upstream and downstream, while an ADSL connection may offer 20 Mbps downstream and 1.5 Mbps upstream. In order to access the Internet using DSL, you must connect to a DSL Internet service provider (ISP). The ISP will provide you with a DSL modem, which you can connect to either a router or a computer. Some DSL modems now have built-in wireless routers, which allows you to connect to your DSL modem via Wi-Fi. A DSL kit may also include a splitter and filters that you can connect to landline phones. NOTE: Since DSL signals have a limited range, you must live within a specific distance of an ISP in order to be eligible for DSL Internet service. While most urban locations now have access to DSL, it is not available in many rural areas. Updated September 25, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dsl Copy

DSLAM

DSLAM Stands for "Digital Subscriber Line Access Multiplexer." A DSLAM is a networking device used by Internet Service Providers (ISPs) to route their subscribers' DSL connections to the Internet. It combines, or "multiplexes," separate connections from multiple subscribers into one aggregate connection. DSL modems in a single neighborhood (or other local loop) communicate over individual telephone lines to the local DSLAM. The DSLAM merges their traffic and routes it over a higher-bandwidth Internet backbone connection. It also prioritizes traffic and limits the bandwidth to certain DSL connections when necessary. DSLAMs are usually located in an ISP's telephone exchange office, although ISPs may also install them on-premises where there are a large number of DSL connections — for example, an office complex, hotel, or apartment building. Each DSLAM can handle a certain number of subscribers, so ISPs use multiple DSLAMs deployed and configured to route traffic as efficiently as possible. Updated December 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dslam Copy

DTD

DTD Stands for "Document Type Definition." A DTD defines the tags and attributes used in an XML or HTML document. Any elements defined in a DTD can be used in these documents, along with the predefined tags and attributes that are part of each markup language. The following is an example of a DTD used for defining an automobile: <!DOCTYPE automobile [ <!ENTITY header "Car Details"> <!ELEMENT make (#PCDATA)> <!ELEMENT model (#PCDATA)> <!ATTLIST model doors (two | four) "four"> <!ELEMENT year (#PCDATA)> <!ELEMENT engine (#PCDATA)> <!ATTLIST engine transmission (manual | automatic) "manual"> ]> The above DTD first defines the header of the item as "Car Details." Then it provides elements to define the make and model of the automobile (the "#PCDATA" data type means the element can contain any text value). The "ATTLIST" tag on the next line provides options for a specific element. In this case, it states that the model can have either two or four doors. The DTD then provides elements for the year and engine type of the car, followed by a choice of either a manual or automatic transmission for the engine. The above example is a basic DTD that only uses a few data types. Document type definitions used for large XML databases can be thousands of lines long and can include many other data types. Fortunately, DTDs can be easily modified in a text editor whenever changes need to be made. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dtd Copy
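As a rough sketch of putting a DTD to work (using the third-party lxml library; the tiny DTD and XML snippets below are made-up examples rather than the automobile DTD above), a parser can check whether a document follows its document type definition:

    # Minimal sketch: validate XML against a small DTD with lxml.
    # Requires the third-party "lxml" package (pip install lxml).
    from io import StringIO
    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT note (to, body)>
    <!ELEMENT to (#PCDATA)>
    <!ELEMENT body (#PCDATA)>
    """))

    good = etree.fromstring("<note><to>Ann</to><body>Hi!</body></note>")
    bad = etree.fromstring("<note><body>Missing recipient</body></note>")

    print(dtd.validate(good))   # True  - matches the DTD
    print(dtd.validate(bad))    # False - the required <to> element is missing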

Dual Boot

Dual Boot A dual boot system is a computer that can boot into two different operating systems. While most computers automatically load a specific operating system (OS) at startup, a dual boot system allows you to choose what OS you would like to load. For example, a dual boot Windows system may provide an option to load either Windows 7 or Windows 8 at startup. Linux and Mac OS X users can install Windows to create a Linux/Windows or Mac/Windows dual boot configuration. Dual boot systems are often used by computer enthusiasts, who prefer different operating systems for different tasks or want to run OS-specific applications. Software developers also use dual boot systems to test their software on multiple operating systems. A single dual boot system is more efficient than buying and setting up two separate computers. In order to create a dual boot system, you first need to install a custom boot manager or "boot loader." This is a small program that loads near the beginning of the boot sequence before the OS loads. Common PC boot loaders include LILO and GRUB, which support Linux and Windows. Mac users can install Apple's Boot Camp utility. In most cases, you will need to install each operating system on a separate partition. Linux installations may require multiple partitions just for the Linux OS. While the disk partitioning can be done manually, it is typically accomplished using a dual boot utility like EasyBCD (Windows) or Boot Camp (Mac OS X). While many users still create dual boot machines, virtualization has become a more common way to run multiple operating systems on a single computer. Unlike dual booting, virtualization allows you to run multiple operating systems at the same time and switch between them without restarting. However, booting into a specific OS typically yields the best performance. NOTE: Dual boot may also be called "multi boot," which refers to a computer that can load two or more different operating systems. Updated May 9, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dual_boot Copy

Dual Processor

Dual Processor Dual processor refers to a computer with two separate processors. The processors work in tandem to process data using a technique called multiprocessing. Instructions are split between the two processors (or CPUs), allowing the computer to perform faster than a similar machine with only one processor. In theory, two CPUs can process twice as much data per second as a single CPU. However, because the two processors share resources, such as L2 and L3 caches, busses, and system memory, there are bottlenecks that slow down the overall performance. Also, programs must be written to take advantage of multiprocessing, meaning the performance of an application on a dual processor machine is dependent on how the application is written. As a result, dual processor machines are noticeably faster than single processor machines, but rarely twice as fast. Dual Processor vs Dual-Core Dual processor is similar to dual-core, but different. A dual processor computer has two separate CPUs, which are physically separated on the motherboard. The two processors may share resources (like the CPU bus and cache), but are physically separate. In a dual-core system, the two processors are combined into a single chip that may look like one processor. Since the processors are combined into one entity, they may collectively be called a single dual-core CPU. While dual processor and dual-core have two different meanings, they are not mutually exclusive. Some systems have two dual-core CPUs, totaling four processing cores. Desktop computers may have four, six, or eight processor cores. High-end scientific computing machines can go way beyond dual processor configurations and may include dozens of processors. Updated January 19, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dual_processor Copy
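To give a feel for how a program must be written to take advantage of multiprocessing, the Python sketch below splits a simple workload across two worker processes using the standard multiprocessing module; the workload and process count are arbitrary choices for illustration.

    # Minimal sketch: divide work between two processes so two CPUs can share it.
    from multiprocessing import Pool

    def work(n):
        # A stand-in for a CPU-heavy task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 4
        with Pool(processes=2) as pool:        # one worker per processor
            results = pool.map(work, jobs)     # jobs are split between the workers
        print(results)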

Dual-Core

Dual-Core A dual-core processor is a CPU with two processors or "execution cores" in the same integrated circuit. Each processor has its own cache and controller, which enables it to function as efficiently as a single processor. However, because the two processors are linked together, they can perform operations up to twice as fast as a single processor can. The Intel Core Duo, the AMD X2, and the dual-core PowerPC G5 are all examples of CPUs that use dual-core technologies. These CPUs each combine two processor cores on a single silicon chip. This is different than a "dual processor" configuration, in which two physically separate CPUs work together. However, some high-end machines, such as the PowerPC G5 Quad, use two separate dual-core processors together, providing up to four times the performance of a single processor. While a dual-core system has twice the processing power of a single-processor machine, it does not always perform twice as fast. This is because the software running on the machine may not be able to take full advantage of both processors. Some operating systems and programs are optimized for multiprocessing, while others are not. Though programs that have been optimized for multiple processors will run especially fast on dual-core systems, most programs will see at least some benefit from multiple processors as well. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dualcore Copy

DUT

DUT Stands for "Device Under Test" and is pronounced "D-U-T." A DUT is a product or component that is undergoing testing. The testing phase often takes place before a product is sold or after it is repaired. A DUT may also be a prototype that is not meant to be sold, but is tested until it breaks or fails. Quality control engineers often test electronic devices before they are packaged and sold. They may test random devices or every device that is manufactured. Some tests are performed manually, while others may be conducted using robotics or scientific measuring tools. Below are examples of DUT tests that might be run on a newly manufactured product: Does the device turn on? Does the screen work as expected? Does the device accept input? Is the device's output correct? Is the size and shape within the acceptable range? DUTs are also used for reliability testing. These devices may be prototypes, random samples from a batch of products, or devices that have already been in use. The goal of reliability testing is to gauge the typical lifespan of a product and what operating conditions are acceptable. Below are examples of DUT reliability tests: How long does the device run before encountering an error? Which component in the device is the first to fail? What is the safe temperature range in which to operate the device? Do other devices interfere with the operation of the device? How durable is the device when bumped or dropped? DUTs used for reliability testing are pushed to their limits and often their breaking point. These devices are discarded or recycled after being tested. Repaired and refurbished products undergo less rigorous testing. Refurbished DUTs are typically sold or placed back in use if they pass the testing phase. Updated November 29, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dut Copy
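As a simple sketch of how a checklist like the ones above might be automated (every function and property name here is a made-up placeholder; real DUT testing relies on dedicated instruments and fixtures), each check runs against the device and the results are collected into a pass/fail verdict:

    # Toy sketch: run a checklist of pass/fail tests against a device under test.
    def powers_on(dut):      return dut.get("power") == "on"
    def screen_works(dut):   return dut.get("screen") == "ok"
    def accepts_input(dut):  return dut.get("buttons") == "responsive"

    CHECKLIST = [powers_on, screen_works, accepts_input]

    def run_tests(dut):
        results = {test.__name__: test(dut) for test in CHECKLIST}
        verdict = "PASS" if all(results.values()) else "FAIL"
        return results, verdict

    # A pretend device represented as a dictionary of measured properties.
    device = {"power": "on", "screen": "ok", "buttons": "unresponsive"}
    print(run_tests(device))   # accepts_input fails, so the verdict is FAIL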

DV

DV Stands for "Digital Video." DV is an early digital videotape format and is the first digital video format made widely available for amateur use. Unlike analog video tape, which encodes audio and video as an analog waveform, DV stores encoded audio and video signals digitally on magnetic tape in a cassette. DV cassette tapes are available in several sizes — the smallest, MiniDV, was widely used on consumer video cameras until cameras with internal flash storage became available. DV video is typically standard definition (720x480 pixels), although a high-definition version, HDV, supports 720p and 1080i video. The standard-definition DV codec lightly compresses video using an intra-frame method, processing video one frame at a time to make video editing easier (unlike inter-frame encoding, which compresses video by comparing the differences between consecutive video frames). The DV codec is not limited to recording video to magnetic tape; it can also store video as discrete files on hard drives, memory cards, or a camera's built-in flash storage. Since DV stores audio and video digitally, camcorders can easily transfer video to a computer for playback and editing. DV camcorders export video from the cassette to a computer using a Firewire (IEEE 1394) cable, which can then be edited directly in video editing software without any additional analog-to-digital conversion. Updated November 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dv Copy

DVD

DVD Stands for "Digital Versatile Disc." A DVD is a type of optical media used for storing digital data. It is the same size as a CD but has a larger storage capacity. Some DVDs are formatted specifically for video playback, while others may contain different types of data, such as software programs and computer files. The original "DVD-Video" format was standardized in 1995 by a consortium of electronics companies, including Sony, Panasonic, Toshiba, and Philips. It provided a number of improvements over analog VHS tapes, including higher quality video, widescreen aspect ratios, custom menus, and chapter markers, which allow you to jump to different sections within a video. DVDs can also be watched repeatedly without reducing the quality of the video and of course, they don't need to be rewound. A standard video DVD can store 4.7 GB of data, which is enough to hold over 2 hours of standard-definition (720x480) video using MPEG-2 compression. DVDs are also used to distribute software programs. Since some applications and other software (such as clip art collections) are too large to fit on a single 700 MB CD, DVDs provide a way to distribute large programs on a single disc. Writable DVDs also provide a way to store a large number of files and back up data. The writable DVD formats include DVD-R, DVD+R, DVD-RW, DVD+RW, and DVD-RAM. While the different writable DVD formats caused a lot of confusion and incompatibility issues in the early 2000s, most DVD drives now support all formats besides DVD-RAM. A standard DVD can hold 4.7 GB of data, but variations of the original DVD format have greater capacities. For example, a dual-layer DVD (which has two layers of data on a single side of the disc) can store 8.5 GB of data. A dual-sided DVD can store 9.4 GB of data (4.7 x 2). A dual-layer, dual-sided DVD can store 17.1 GB of data. The larger capacity formats are not supported by most standalone DVD players, but they can be used with many computer-based DVD drives. Updated October 20, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dvd Copy

DVD-R

DVD-R Stands for "Digital Versatile Disc Recordable" and is pronounced "DVD dash R." DVD-R is one of two standards for recordable DVDs, along with DVD+R. It is a write-once format, meaning that data is written to the disc permanently and cannot be erased or modified. Of the two standards, DVD-R is compatible with more players than DVD+R. Single-layer DVD-R discs have a capacity of 4.7 GB of data or video; double-layer discs can store 8.5 GB. Only a compatible DVD-R or Hybrid (DVD±R) optical drive can write to a DVD-R disc. The rewritable version of DVD-R is called DVD-RW. DVD-R and DVD+R discs are both writable optical media formats. A DVD-R drive uses a laser to burn a series of spots onto a layer of heat-sensitive dye on a blank disc. Like the pits on a professionally-pressed DVD, these spots encode digital data onto the disc that a laser in the drive reads and decodes. Both recordable DVD standards have microscopic grooves that run from the start of the disc to the end that guide the laser as it burns data. This groove has a slight but consistent wobble that ensures the disc is spinning at the correct speed, wobbling at a frequency of 140.6 kHz on a DVD-R (compared to a much faster 817.4 kHz on a DVD+R). A stack of blank DVD-R discs Even after a DVD-R has been written to once, if there is still space on the disc, a drive can burn additional sessions to add more data. However, the DVD-R format uses a less precise tracking method when writing to the disc than DVD+R. Starting another session on a DVD-R requires a larger buffer between the two sessions, wasting a small amount of disc space. Updated October 20, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dvd-r Copy

DVD-RAM

DVD-RAM Stands for "Digital Versatile Disc Random Access Memory." DVD-RAMs are writable DVDs that can be erased and rewritten like DVD-RW and DVD+RW discs. Unlike the other two writable DVD formats, DVD-RAM discs support advanced error correction and defect management. While these features slow down the maximum data transfer rate for DVD-RAM discs, they also make the discs more reliable. Early DVD-RAM discs required an enclosing cartridge, which meant they would not fit in most DVD players or DVD-ROM drives. Therefore, you would need a DVD-RAM drive to read DVD-RAM discs, as well as to burn them. Newer DVD-RAM discs, however, can be used without a cartridge. These discs can be played in any DVD player that supports the DVD-RAM format. While the first DVD-RAM media could only hold 2.6 GB on a single-sided disc, newer double-sided discs can store up to 9.4 GB. Updated March 17, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dvdram Copy

DVD-RW

DVD-RW Stands for "Digital Versatile Disc Rewritable" and is pronounced "DVD dash RW." DVD-RW is one of two standards for rewritable DVDs, along with DVD+RW. A rewritable DVD-RW disc can be written to, erased, and then written to again up to 1,000 times. DVD-RW discs have a capacity of 4.7 GB of data or video. Rewritable optical discs like DVD-RW are not as well-supported by DVD players as write-once formats (like DVD-R) or commercially-pressed discs but are still readable by most players. Only a compatible DVD-RW or Hybrid (DVD±RW) optical drive can write to a DVD-RW disc. DVD-RW and DVD+RW discs are both rewritable optical media formats. Once written, a disc's contents are read-only until the disc is erased and rewritten. Since you cannot edit files directly on a disc, they're a poor choice for storing files that need to be frequently modified. Instead, the format is better suited for making frequent data backups. An optical disc drive that can write discs has three lasers—a low-power laser that reads data, a medium-power laser that erases rewritable discs, and a high-power laser that burns data to a disc. A DVD-RW drive burns a series of spots onto a reflective layer of a phase change metal alloy on the underside of the disc. At first, this metal alloy exists in a crystalline form that lets light through. A high-power burning laser heats the alloy past its melting point, which then turns opaque as it cools. These opaque spots encode digital data that the reading laser in the drive reads and decodes. When it's time to erase a disc, the drive uses the medium-powered erase laser to heat the alloy layer to its crystallization point. Updated October 20, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dvd-rw Copy

DVD+R

DVD+R Stands for "Digital Versatile Disc Recordable" and is pronounced "DVD plus R." DVD+R is one of two standards for recordable DVDs along with DVD-R. Data written to a DVD+R is permanent and cannot be erased or modified once on the disc. DVD+R discs are slightly less compatible with DVD players than the competing DVD-R format, but most DVD players and drives can read them. Single-layer DVD+R discs have a capacity of 4.7 GB of data or video; double-layer discs increase that to 8.5 GB. Only a compatible DVD+R or Hybrid (DVD±R) drive can write to a DVD+R disc. The rewritable version of DVD+R is called DVD+RW. DVD+R and DVD-R discs are both writable optical media formats that operate on the same principle, using a laser to burn a series of spots onto a layer of heat-sensitive dye on a blank disc. These spots serve the same function as the pits on a professionally-pressed DVD, encoding digital data that the laser in the drive reads and decodes. Both types have microscopic grooves that run from beginning to end to guide the laser and keep it on track. This groove has a slight but consistent wobble that ensures the disc is spinning at the correct speed. It wobbles at a much faster frequency on a DVD+R disc, 817.4 kHz, compared to 140.6 kHz on a DVD-R. The other significant difference between the two standards is that DVD+R drives use a more precise method to track where that data is written on the disc to make it easier to add information over multiple sessions. Starting a second writing session on a DVD-R requires a larger buffer between the two sessions, wasting a small amount of disc space. DVD+R discs are also slightly less expensive to manufacture. Competing Formats DVD-R and DVD+R are two similar standards for writing to a DVD, developed at roughly the same time by two competing industry groups. The DVD-R format was developed first by Pioneer in 1997, with the support of the DVD Forum licensing group. Sony and Philips formed the DVD+RW Alliance and developed DVD+R in 2002 as a competing standard using some of the same technology the two companies had used in the CD-R standard years earlier. The DVD Forum controlled all official DVD licensing and logos, and did not allow DVD+R the status of an official DVD product until several years later. A DVD-R disc (left) and a DVD+R disc (right), showing different industry group logos The technology used by both standards was similar enough that disc drives that could write one format could technically write to the other. However, drive manufacturers had to pay a licensing fee to whichever group whose standard they supported. Drives were initially limited to one standard or the other, but eventually, manufacturers began licensing both technologies and producing "Hybrid" or "DVD±R" drives that supported both formats. Updated October 12, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dvdr Copy

DVD+RW

DVD+RW Stands for "Digital Versatile Disc Rewritable" and is pronounced "DVD plus RW." DVD+RW is one of two standards for rewritable DVDs, along with DVD-RW. DVD+RW discs can be written to, erased, and rewritten up to 1,000 times and can store up to 4.7 GB of data or video. Since the rewritable surface is less reflective than write-once formats like DVD+R or commercially-pressed discs, DVD+RW discs are not as well-supported by DVD players. Only a compatible DVD+RW or Hybrid (DVD±RW) drive can write to a DVD+RW disc. DVD+RW and DVD-RW discs are erasable and rewritable optical media formats. Once written, a disc's contents are read-only and cannot be edited; in order to change the data on a disc, it must be erased and then rewritten. This process makes the format a poor choice for storing files that need to be frequently modified. Instead, the rewritable discs are better suited for making frequent data backups. An optical disc drive that can write discs has three lasers—a low-power laser that reads data, a medium-power laser that erases rewritable discs, and a high-power laser that burns data to a disc. A DVD+RW drive burns a series of spots onto a reflective layer of a phase change metal alloy on the underside of the disc. At first, this metal alloy exists in a crystalline form that lets light through. A high-power burning laser heats the alloy past its melting point, which then turns opaque as it cools. These opaque spots encode digital data that the reading laser in the drive reads and decodes. When it's time to erase a disc, the drive uses the medium-powered erase laser to heat the alloy layer to its crystallization point. NOTE: DVD+RW and DVD-RW were developed as competing disc formats by two separate industry groups. Since the two are technically very similar, most rewritable optical disc drives are Hybrid (DVD±RW) drives that can write to both formats. Updated October 25, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dvdrw Copy

DVI

DVI Stands for "Digital Visual Interface." DVI is a video interface used to connect monitors, projectors, and other display devices to a computer. It supports both analog and digital video signals over a single connector, although some variant connectors only supply one or the other. DVI was a common display interface in the 2000s and 2010s. It replaced the older analog-only VGA interface and was itself replaced in the mid-2010s by HDMI and DisplayPort. The DVI interface was the first mainstream video interface that could transmit a purely digital video signal. Previous interfaces, like VGA, first converted the digital video signal to an analog signal and sent that signal over a cable. However, analog video signals are susceptible to electrical interference that could affect image quality. By supporting both analog and digital video signals, the DVI interface allowed computer users to use either an analog CRT monitor or a digital LCD monitor through the same port on their graphics card. The DVI interface used several types of connectors that used different pin layouts to carry video signals: DVI-A carries only an analog video signal, which can be converted to a VGA connection using a passive adapter. DVI-D transmits only a digital video signal. There are two subtypes of DVI-D connectors, single-link and dual-link, with dual-link offering more bandwidth to support higher resolutions and refresh rates. DVI-I carries both an analog and a digital video signal, allowing a computer to connect to either type of display using the same connector. Like DVI-D, DVI-I is available in single-link and dual-link versions, although only the digital signal benefits from the additional bandwidth. Like DVI-A, its video signal can be converted to a VGA signal using a passive adapter. By the 2010s, analog CRT monitors were no longer widely used, and the ability to transmit an analog video signal was unnecessary. Newer interfaces like HDMI and DisplayPort provide purely digital video signals while also including audio. DisplayPort can also carry USB data to allow a monitor to include a built-in USB hub. HDMI and DisplayPort are capable of higher resolutions and refresh rates than DVI, leading to DVI becoming an obsolete interface. Updated September 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dvi Copy

Dvorak Keyboard

Dvorak Keyboard The Dvorak keyboard is a keyboard layout named after its designer, Dr. August Dvorak. He designed the keyboard as an alternative to the standard QWERTY keyboard layout, with the goal of improving typing ergonomics. Dvorak developed the new keyboard layout after studying common typing patterns. He determined that the QWERTY layout, which was designed for telegraph operators and early typewriters, was inefficient. It required awkward motions, didn't use the home row (ASDF) enough, and required many common key patterns to be typed with one hand. To improve typing efficiency, Dvorak designed his keyboard layout to alternate keystrokes between left and right hands. He also placed the most common letters in the home row. For example, since nearly all words have vowels, the home row on the Dvorak keyboard begins with the letters AOEUI. The vowel keys are placed next to each other since vowels often alternate with consonants. Dvorak patented his keyboard layout in 1936, claiming the layout offered faster typing speeds, greater accuracy, and less fatigue than the QWERTY keyboard. Despite these benefits, the Dvorak keyboard has never achieved the popularity of the QWERTY layout. Most people still learn to type on a QWERTY keyboard and simply do not want to learn a new keyboard layout. Therefore, nearly all desktop computers and laptops sold in Western countries come with QWERTY keyboards. NOTE: The original version of the Dvorak keyboard is also known as the Dvorak Simplified Keyboard (DSK). In 1982, ANSI standardized a slight variation of the Dvorak keyboard layout, called the American Simplified Keyboard (ASK). Updated November 7, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dvorak_keyboard Copy

DVR

DVR Stands for "Digital Video Recorder." A DVR is a device that can digitally record, save, and play back television programs. A DVR can also pause live TV by constantly recording a buffer of live programming as it airs. Later, the user can choose to fast-forward (often during commercials) to return to live television. Most satellite and cable TV companies offer DVR capabilities in one of two forms — local DVR using hard drives or solid-state drives built into tuner boxes, or cloud DVR that records and stores video on a remote server. Streaming TV services like YouTube TV only offer cloud DVR since they don't provide any hardware boxes. You can schedule DVR recordings using the program guide — simply browse or search through listings and select programs you'd like to record. Once the shows are recorded, they'll appear in a list you can access at any time to watch on demand. A TiVo Edge DVR box TiVo produced the first dedicated DVR devices that could record and pause live TV. TiVo devices perform the same functions as a DVR offered directly from a cable or satellite provider but are available as a one-time purchase instead of a monthly rental. Some TiVo devices can function as cable boxes using a cable card provided by the cable company, while others can record over-the-air TV using an antenna (without a cable or satellite subscription at all). While TiVo users who buy their own TiVo boxes may avoid the monthly rental fees from their cable company, they must still pay a monthly fee for program listings if they want to schedule recordings. Updated March 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/dvr Copy

Dynamic Website

Dynamic Website Dynamic websites contain Web pages that are generated in real-time. These pages include Web scripting code, such as PHP or ASP. When a dynamic page is accessed, the code within the page is parsed on the Web server and the resulting HTML is sent to the client's Web browser. Most large websites are dynamic since they are easier to maintain than static websites. This is because static pages each contain unique content, meaning they must be manually opened, edited, and published whenever a change is made. Dynamic pages, on the other hand, access information from a database. Therefore, to alter the content of a dynamic page, the webmaster may only need to update a database record. This is especially helpful for large sites that contain hundreds or thousands of pages. It also makes it possible for multiple users to update the content of a website without editing the layout of the pages. Dynamic websites that access information from a database are also called database-driven websites. Updated June 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/dynamicwebsite Copy
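
As a rough illustration of how a dynamic page works, the sketch below uses Python with a SQLite database (the article mentions PHP and ASP; Python, the articles.db file, and the articles table are hypothetical choices made only for this example). The server-side code looks up a record and builds the HTML at request time, so updating the database row changes the page without editing any HTML files.

import sqlite3

def render_article(article_id):
    # Fetch the requested record from a hypothetical "articles" table.
    conn = sqlite3.connect("articles.db")
    row = conn.execute(
        "SELECT title, body FROM articles WHERE id = ?", (article_id,)
    ).fetchone()
    conn.close()
    if row is None:
        return "<h1>Article not found</h1>"
    title, body = row
    # Build the HTML in real time from the stored content.
    return f"<html><body><h1>{title}</h1><p>{body}</p></body></html>"

print(render_article(1))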

E-commerce

E-commerce E-commerce (or electronic commerce) refers to commercial activity conducted over the Internet. An e-commerce transaction occurs whenever someone buys goods or services from a business' website. An e-commerce transaction can take lots of forms, such as a person buying a piece of software that they can download, ordering a pair of shoes to be delivered, or hiring an artist to draw a picture. E-commerce has several advantages over in-person commerce but also some drawbacks. Buying a digital good — like software, music, or a movie download — can happen instantly, but buying a physical product requires it to be shipped and delivered. An e-commerce retailer can have a large selection of products available to ship, but the customer does not have the opportunity to examine things in person. Advances in augmented reality (AR) are beginning to offer customers the chance to see what a product would look like in their homes. There are four main types of e-commerce based on who is buying and who is selling. Business to Consumer (B2C) is the most common form, where a person visits a website to buy something. A person buying a product or service from a business, once or on a subscription basis, is B2C e-commerce. Business to Business (B2B) is when one business purchases goods or services from another business. A business selling to a government or other organization, while not strictly B2B, is also considered B2B e-commerce. Consumer to Consumer (C2C) is when one person sells something to another person, facilitated by a website or other Internet service. Auction sites like eBay were the earliest forms of C2C e-commerce, and sites like Etsy now allow individuals offering products and services to find customers easily. Consumer-to-Business (C2B) e-commerce is where an individual sells a product (or, more often, a service) to a business. One common form of this is freelance contract work, like a photographer licensing an image to a stock photo agency or a developer creating a custom application for a business. Updated October 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ecommerce Copy

E-learning

E-learning E-learning or "electronic learning" is an umbrella term that describes education using electronic devices and digital media. It encompasses everything from traditional classrooms that incorporate basic technology to online universities. E-learning in a traditional setting may include educational films and PowerPoint presentations. These types of media can provide students with content that is more dynamic and engaging than textbooks and a whiteboard. Edutainment, or content that is designed to be educational and entertaining, may be used to keep students' attention while providing knowledge about a particular topic. A documentary film, for example, may be both engaging and informative. While some classrooms incorporate digital technology, others are designed around it. A Classroom Performance System (CPS), for example, provides a completely digital learning environment. It includes a projector for displaying videos and web content and a digital chalkboard for the instructor. Students can complete quizzes and tests using digital response pads rather than handing in papers. The paperless environment provides an efficient way for students to learn and ensures teachers always have the latest instructional materials. Online education is another common form of e-learning. Many colleges and universities now allow students to submit assignments and complete tests online. Some educational institutions are 100% online, meaning students never have to attend class inside a physical classroom. In order to maintain a sense of community, online universities often provide and even require students to participate in online discussions using Moodle or another virtual learning environment. Online classes are typically administered by an accredited professor who may give live or recorded lectures that students can watch online. The professor also grades students' assignments and is available to answer individual questions. In most cases, credits earned online are equivalent to those earned in a traditional classroom setting. Updated November 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/e-learning Copy

E-mail Bankruptcy

E-mail Bankruptcy In this day and age, most of us receive several e-mails a day. Depending on your job, you may even receive dozens of daily messages that are not spam. While it is hard enough to keep up with this plethora of e-mails received in a single day, if you fall behind a few days, it can be nearly impossible to catch up. After a while, you may end up with hundreds of messages in your inbox that have not been replied to. If you become submerged underneath an endless pile of e-mail in your inbox, the only way out may be to declare e-mail bankruptcy. Similar to a financial bankruptcy, e-mail bankruptcy involves writing off the losses and starting over. The most tactful way of declaring e-mail bankruptcy is to paste all the e-mail addresses from the messages you have not responded to into a single message. Then send a message explaining that you have fallen too far behind on your e-mail and apologize for not responding. The quicker, but less considerate option is to simply delete all the old messages and start over like nothing ever happened. While it is best to avoid e-mail bankruptcy by keeping up with your e-mail, for some people it may be the only way to get current with their correspondence. If you are in a situation where you feel overwhelmed by the growing number of messages in your inbox, make sure you first reply to the most important messages. Then, as a last resort, declaring e-mail bankruptcy may give you the fresh start you need. Updated August 7, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/emailbankruptcy Copy

E-reader

E-reader An e-reader, or "e-book reader," is a portable hardware device designed for reading digital publications. These include e-books, electronic magazines, and digital versions of newspapers. Since textual data does not require a lot of storage space, most e-readers can store thousands of books and other publications. Just like an iPod can store an entire music library, a single e-reader can store a large collection of books. Dozens of different e-readers are available, but some of the most popular ones include the Amazon Kindle, the Barnes and Noble Nook, and the Sony Reader. These devices all support a wide range of eBook formats and can download content over a wireless network. Many e-readers have a monochrome display, often called "electronic paper," while others have a full-color backlit display. While the electronic paper displays do not provide color images, the screen appears more like a paper page from a book, and it can be easily viewed in bright sunlight. Tablets, such as the Apple iPad, the BlackBerry PlayBook, and the Amazon Kindle Fire are often considered e-readers, since they can be used for reading digital publications. However, it is more accurate to refer to these devices as tablets that can be used as e-readers since they are not designed primarily as digital readers. Tablets offer more capabilities than e-readers, but e-readers are often better suited for just reading e-books. Updated November 15, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ereader Copy

eBook

eBook eBook (or e-book) is short for "electronic book." It is a digital publication that can be read on a computer, e-reader, or other electronic device. eBooks are available in several different file formats. Some are open formats that can be read on multiple devices, while others are proprietary and can only be viewed on a specific device, such as an iPad or Kindle. Commercially available publications often include some kind of digital rights management (DRM) that prevents the content from being viewed on unauthorized devices. For example, many books available through Amazon's Kindle Store and Apple's iBookstore are copy-protected using DRM. While there are many types of eBook formats, all major ones support text, images, chapters, and page markers. Most formats also support user annotations, such as highlighted text, drawings, and notes. For example, the Sony Reader includes a handwriting feature that allows you to underline specific text on a page. The Amazon Kindle includes a highlighter pen used for highlighting text. Some e-readers allow you to share your annotations with others online and view what text other readers have highlighted or commented on. NOTE: An eBook may be a novel, magazine, newspaper, or other publication. However, the electronic versions of magazines and newspapers are often called "digital editions" to differentiate them from electronic books. View a comprehensive list of eBook formats. File extensions: .EPUB, .LIT, .AZW3, .IBOOKS Updated January 29, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ebook Copy

ECC

ECC Stands for "Error Correction Code." ECC is a method of verifying the integrity of data written to and read from system memory. Special RAM modules, called ECC RAM, use ECC to protect data from corruption and are often used in systems where reliability and uptime are critical. Workstations, servers, medical and scientific equipment, and aircraft control computers typically use ECC RAM. Most computer memory errors are single-bit errors caused by an individual bit's value flipping from 0 to 1 (or vice versa) due to some sort of electromagnetic interference. These errors often result in system instability or corrupted data — for example, changing a single character of text or the color value of a pixel. ECC memory can locate single-bit errors and fix them, flipping the affected bit back to its original value. When a system writes data to an ECC memory module, it does two things that help it track errors. First, it adds an extra bit of data to each word (or group of bits), called a parity bit, which keeps track of how many bits in a particular word have a value of 1. It also generates an error correction code using a built-in algorithm based on the positions of 0s and 1s in the word. If the number of 1s changes due to a single-bit error, the parity bit won't match. The ECC circuitry in the memory module refers back to the error correction code it generated to identify which bit flipped, then flips it back to its original state. ECC RAM modules include more chips than non-ECC RAM to store parity bits ECC memory modules are more complex due to the extra circuitry, which makes them more expensive than non-ECC memory. They require more memory chips than non-ECC memory modules in order to store the parity bits. ECC memory also performs slower than non-ECC memory, thanks to the extra steps required to generate error correction codes and check the parity bit. Because of the cost and performance hit, a typical desktop or laptop computer uses non-ECC RAM, leaving ECC RAM for high-uptime servers and workstations. Updated July 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ecc Copy
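
The simplified Python sketch below shows only the parity-bit half of the idea: a stored parity value reveals when a single bit in a word has flipped. Real ECC RAM pairs parity with a hardware error correction code (typically a Hamming-style code) that also locates and repairs the flipped bit, so this is an illustration of the concept, not the actual circuitry or algorithm.

def parity(word):
    # Returns 1 if the word contains an odd number of 1-bits, 0 if even.
    return bin(word).count("1") % 2

stored_word = 0b10110010             # data written to memory
stored_parity = parity(stored_word)  # extra bit stored alongside the word

corrupted = stored_word ^ 0b00000100  # electromagnetic interference flips one bit

if parity(corrupted) != stored_parity:
    print("Single-bit error detected")
    # Parity alone only detects the error; the error correction code is what
    # identifies which bit flipped so the module can set it back.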

Edge Caching

Edge Caching Edge caching is a mechanism content delivery networks (CDNs) use to cache Internet content in different locations around the world. Examples include website data, cloud storage, and streaming media. By storing copies of files in multiple "edge" locations, a CDN can deliver content to users more quickly than a single server can. CDNs may have tens or hundreds of global data centers. Each data center contains edge servers that intelligently serve data to nearby users. In most cases, edge servers "pull" data from an origin server when users request content for the first time. Once an edge server pulls an image, video, or another object, it caches the file — typically for a few days or weeks. The CDN then serves subsequent requests from the edge server rather than the origin. Tiered Caching A CDN may automatically propagate newly pulled content to all servers or wait for each server to request the data. Automatic propagation reduces trips to the origin server but may result in unnecessary duplication of rarely-accessed files. Waiting for local requests is more efficient but increases trips to the origin server. Modern CDNs use tiered caching to balance the two methods. The first time a user accesses a file, the CDN caches it on the local edge server and several "primary data centers." It reduces unnecessary propagation of cached files and limits trips to the origin server. CDNs provide customizable cache settings, such as how frequently to check for updated files or when to let cached files expire. Most have a "purge" feature, which allows webmasters to remove old content from all edge servers at once. Purging files is useful when updating static assets, such as CSS documents and image files. Edge Caching Benefits Edge caching reduces latency, providing faster and more consistent delivery of Internet content to users around the world. For example, a user in Sydney, Australia, may experience a two-second delay when accessing a server in Houston, Texas. If the data is cached in Australia, the delay may be less than one-tenth of a second. While the primary purpose of edge caching is to improve content delivery speed, it also provides two other significant benefits: bandwidth reduction and redundancy. Edge caching reduces Internet bandwidth by shortening the distance data needs to travel to each user. Local edge servers reduce Internet congestion and limit traffic bottlenecks. Replicating data across multiple global data centers also provides redundancy. While CDNs typically pull data from an origin server, individual data centers can serve as backups if the origin server fails or becomes inaccessible. Updated April 23, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/edge_caching Copy
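
Conceptually, an edge server's pull-and-cache behavior resembles the Python sketch below. The function names, the one-day TTL, and the dictionary cache are all invented for illustration — real CDNs use far more sophisticated storage, tiering, and eviction policies.

import time

CACHE_TTL = 60 * 60 * 24      # keep cached copies for one day (illustrative value)
edge_cache = {}               # URL -> (content, time it was cached)

def fetch_from_origin(url):
    # Placeholder for a real HTTP request to the origin server.
    return f"content of {url}"

def serve(url):
    entry = edge_cache.get(url)
    if entry and time.time() - entry[1] < CACHE_TTL:
        return entry[0]                       # cache hit: served from the edge
    content = fetch_from_origin(url)          # cache miss: pull from the origin
    edge_cache[url] = (content, time.time())  # store a copy at the edge
    return content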

Edge Server

Edge Server An edge server is a computer located at the "edge" of the Internet, serving users in a specific area. CDNs and edge computing services use edge servers to provide Internet content and computing power as fast as possible. Edge servers are distributed around the world at locations called points of presence, or PoPs. By placing servers closer to users, latency is reduced, and Internet users can access data more quickly. For example, if someone in Tokyo accesses a website hosted in New York, the ping may be over half a second, causing a delay before the webpage loads. If the content is hosted on an edge server in Tokyo, the ping may be only a few milliseconds, allowing the webpage to load much faster. CDN Edge Servers Content delivery networks, or CDNs, use edge servers to cache content, such as websites and streaming media. They retrieve data from an origin server, then replicate it globally across the edge network. When a user in Brasil accesses a website with an origin server in England, the CDN may serve the content from an edge server in Rio de Janeiro. CDNs often host static assets, such as CSS, JavaScript, and image files. However, they can also cache and serve HTML. Pages with dynamic content must be regularly refreshed or "purged" so users do not receive outdated pages stored in the CDN cache. Edge Computing Servers Edge computing networks use edge servers to perform calculations in different locations worldwide. Handing computation tasks closer to the source of each request reduces roundtrip delays and provides faster response times. Examples include online ad targeting, video conferencing, and multiplayer gaming. While edge servers provide users faster access to Internet content, they also have a secondary benefit — reducing Internet traffic. Since edge servers deliver content and computing power from locations close to each user, they decrease the data travel and total bandwidth required for each request. Updated April 9, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/edge_server Copy

EDI

EDI Stands for "Electronic Data Interchange." EDI is a standardized method for transferring data between business networks. An EDI system translates data from a company's internal system into a common language that can be understood by that company's trading partner's systems. Businesses use EDI to transfer documents like purchase orders, invoices, and inventory documents. By sending this data electronically, businesses save the time that would have been spent printing forms, mailing them, and entering data that they've received into their system. Two businesses can exchange EDI data in several different ways. They can connect their systems directly, using the Internet to securely send data from one system to another. This process can be cumbersome when dealing with a large number of trading partners, so many businesses use a Value Added Network (VAN), which acts as a virtual post office to facilitate data transfers. VANs route EDI data from one business to another using customer IDs, authenticate the messages and transmitting partner, and notify a business when new data is received. They can provide additional benefits to businesses, like full encryption, analytics, and auditing services. There are several standard EDI formats in use that specify how documents are formatted and transmitted, like EDIFACT (International), X12 (North America), and TRADACOMS (UK). Each standard has a set of forms and data types that all systems using that standard can process. EDI forms are categorized and specialized to certain tasks, like credit/debit adjustments, shipment and billing notices, and return authorizations. Updated November 8, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/edi Copy

Edtech

Edtech Edtech is short for "educational technology." It refers to any technology used for educational purposes. Other industry-specific terms include biotech, medtech, and fintech. Edtech includes hardware, such as Chromebooks and tablets, and software, such as educational apps for PCs and mobile devices. It also includes collaboration tools such as Moodle, Canvas, and Google Classroom. From elementary schools to universities, educational institutions use edtech to provide modern teaching methods. Classroom performance systems, for example, combine several educational technologies to create an interactive learning environment. Instead of taking quizzes with paper and pencil, students can answer questions in real-time using a laptop or tablet. Edutainment tools, such as videos and games, help keep students engaged. Modern Edtech Trends In 2020, remote learning replaced in-person education as the pandemic forced students and teachers to stay at home. Virtual learning environments, or VLEs, quickly replaced physical classrooms so students could learn online. While teachers and students eventually returned to schools, 2020 marked a leap forward in remote learning technologies. Many of these technologies, such as Zoom meetings and online classroom portals, continue to be used today. NOTE: Edtech may also be written "EdTech" or "EduTech." However, "edtech" has become increasingly common because it is consistent with similar terms, such as "fintech" and "biotech." Updated April 27, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/edtech Copy

Edutainment

Edutainment Edutainment combines the words "education" and "entertainment." It refers to any form of entertainment that is educational. The goal of edutainment is to make learning enjoyable and fun. Edutainment is found inside and outside of classrooms and exists across several types of media. Some are passive, while others are interactive. Examples of passive forms of edutainment include: Fictional books with educational themes Movies and TV shows with entertaining characters who teach viewers Music and songs that help people learn Fictional radio shows and podcasts designed to educate listeners Examples of interactive edutainment include: Video games designed to educate players Quizzes that provide goals and rewards as users progress through each level Software that allows users to compete against each other in learning exercises Educational websites with interactive elements, such as clickable images and animations Edutainment may be used to teach nearly any topic, including math, language, history, geography, and science. For example, teachers may use entertaining science videos to help students learn new concepts. Parents can use math or language software to help kids learn at home. By creating a fun learning environment, edutainment can make education easier and enjoyable. Updated December 4, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/edutainment Copy

eGPU

eGPU Stands for "External Graphics Processing Unit." An eGPU is a graphics processor that resides outside of a computer. It is attached via a high-speed connection, such as Thunderbolt cable, which provides sufficient bandwidth to process graphics in real-time outside the computer. eGPUs can be used by both desktop and laptop computers. However, they are more commonly connected to laptops since desktop PCs often have internal expansion slots for graphics cards. In either case, the purpose of an eGPU is to provide the connected machine with higher graphics processing performance. Historically, the graphics performance of a computer without expansion slots was limited by the speed of the built-in graphics card. The introduction of the high-speed Thunderbolt 2 and Thunderbolt 3 interfaces has made it possible to render graphics in real-time using an external GPU. An eGPU setup requires four components: Computer eGPU enclosure or adapter Video card (GPU) Cable While it is possible to use a rudimentary adapter to connect an external graphics card, eGPU enclosures are more commonly used because they protect the GPU and provide more reliable performance. An eGPU enclosure provides a connection for the video card (such as a PCI Express slot) and has an interface for connecting to the computer, such as a Thunderbolt port. Most eGPU enclosures support several different models of video cards, which means you can upgrade the card at any time. Similar to an internal GPU, the graphics processor used in an eGPU must be supported by the operating system in order to work. This means you may need to install drivers for the eGPU before using it. NOTE: Apple added operating system-level support for eGPUs in macOS High Sierra 10.13.4, which was released on March 29, 2018. Updated March 30, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/egpu Copy

EIDE

EIDE Stands for "Enhanced Integrated Drive Electronics." EIDE is an improved version of the IDE drive interface that provides faster data transfer rates than the original standard. While the original IDE drive controllers supported transfer rates of 8.3 Mbps, EIDE can transfer data up to 16.6 Mbps, which is twice as fast. The term EIDE can be a bit ambiguous, since it technically refers to an ATA standard known as ATA-2 or Fast ATA. Therefore, the terms EIDE, ATA-2, and Fast ATA may be used interchangeably. To add to the confusion, EIDE may also refer to the ATA-3 standard, which is similar to ATA-2, but includes additional features. ATA-3 supports the same maximum data transfer rate as ATA-2, but has SMART support and uses a 44 pin connector. While EIDE was the most common drive controller used for many years, it has since been replaced by updated versions of the ATA standard that support Ultra DMA. These include the ATA-4 through ATA-7 standards, which provide data throughput rates from 33 to 133 Mbps. Most modern computers use a completely new standard called "Serial ATA," or SATA, which supports even faster transfer rates. Updated August 28, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/eide Copy

El Capitan

El Capitan El Capitan is the 12th version of Apple OS X, also known as OS X 10.11. It was released on September 30, 2015 and succeeded OS X 10.10 Yosemite. It was followed by macOS 10.12 Sierra. OS X 10.11 was not a significant update to Apple's operating system, but it included a number of performance updates. It was the first version to include "Metal," a graphics technology developed by Apple that accelerates Core Animation and Core Graphics processing. Metal makes full use of the CPU and GPU, boosting system-level rendering by up to 50 percent. El Capitan also added "System Integrity Protection" (SIP), which protects files and directories from unauthorized modification. While El Capitan was primarily a performance update to OS X, it did include a few new features. For example, it introduced "split view" for windows, which snaps a window to half the screen's width when you press and hold the green zoom button in the title bar. It also introduced text formatting in the Notes app and improved syncing with Notes and other applications that support iCloud. OS X 10.11 also added support for numerous gestures in Mail, Messages, and Safari. These gestures, such as swiping back or forward with more than one finger, can be done using the trackpad on a laptop or an external input device, such as Apple's Magic Trackpad. NOTE: El Capitan is a rock formation within Yosemite National Park in California. Apple likely chose this name for OS X 10.11 since El Capitan was designed to be a refinement of Yosemite (10.10), rather than a major update. Updated July 21, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/el_capitan Copy

Electron

Electron Electron is an open source, cross-platform application framework. Software developers can use Electron to create apps using a mix of web technologies like HTML, CSS, and JavaScript instead of platform-specific programming languages. Since it's a cross-platform framework, developers can create apps for Windows, macOS, and Linux using a single codebase. The Electron framework is built upon two open source software packages — Chromium and Node.js. Chromium is a browser engine (also used by web browsers like Google Chrome and Microsoft Edge) that renders an app's user interface. Node.js is a JavaScript runtime environment that executes JavaScript code and provides a direct interface with system APIs. Since Chromium and Node.js themselves are cross-platform, they provide app developers with a way to write their apps once using HTML, CSS, and JavaScript. Electron apps like Visual Studio Code are nearly identical across platforms While Electron can help make app development easier, its use does have some downsides. Since every Electron app needs to include both the full Chromium and Node.js packages in the executable file, they require more storage space and memory than a native app. These requirements increase if you run multiple Electron apps, since each app includes its own copy of Chromium and Node.js. Additionally, security updates to Chromium might not make it to an Electron app promptly, potentially leaving it vulnerable to exploits already discovered and patched in the full web browser. NOTE: Popular apps built using Electron include Microsoft Teams, Slack, Trello, Discord, and Visual Studio Code. Updated May 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/electron Copy

Email

Email Email, short for "electronic mail," is one of the most widely used features of the Internet, along with the web. It allows you to send and receive messages to and from anyone with an email address, anywhere in the world. Email uses multiple protocols within the TCP/IP suite. For example, SMTP is used to send messages, while the POP or IMAP protocols are used to retrieve messages from a mail server. When you configure an email account, you must define your email address, password, and the mail servers used to send and receive messages. Fortunately, most webmail services configure your account automatically, so you only need to enter your email address and password. However, if you use an email client like Microsoft Outlook or Apple Mail, you may need to manually configure each account. Besides the email address and password, you may also have to enter the incoming and outgoing mail servers and enter the correct port numbers for each one. The original email standard only supported plain text messages. Eventually, email evolved to support rich text with custom formatting. Today, email supports HTML, which allows emails to be formatted the same way as websites. HTML email messages can include images, links, and CSS layouts. You can also send files or "email attachments" along with messages. Most mail servers allow you to send multiple attachments with each message, but they limit the total size. In the early days of email, attachments were typically limited to one megabyte, but now many mail servers support email attachments that are 20 megabytes in size or more. Email Netiquette When composing an email message, it is important to use good netiquette. For example, you should always include a subject that summarizes the topic of the email. It is also helpful to begin each message with the recipient's name and end the message with your name or "signature." A typical signature includes your name, email address, and/or website URL. A professional signature may include your company name and title as well. Most email programs allow you to save multiple signatures, which you can insert at the bottom of an email. If you want to send an email to multiple recipients, you can simply add each email address to the "To" field. However, if the email is primarily intended for one person, you should place the additional addresses in the "CC" (carbon copy) field. If you are sending an email to multiple people that don't know each other, it is best to use the "Bcc" (blind carbon copy) field. This hides the email addresses of each recipient, which helps prevent spam. NOTE: Email was originally written "e-mail," but is now more commonly written as "email" without the dash. Updated October 31, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/email Copy
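
For a sense of how a mail client hands a message to an outgoing (SMTP) server, here is a minimal sketch using Python's standard smtplib and email modules. The server name, port, and credentials are placeholders — substitute the settings your mail provider supplies.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Meeting notes"          # a clear subject is good netiquette
msg.set_content("Hi Sam,\n\nHere are the notes from today's meeting.\n\n- Alex")

# Connect to the outgoing mail server, secure the connection, and send.
with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder server/port
    server.starttls()                                    # encrypt the session
    server.login("sender@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)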

Email Address

Email Address An email address is a unique identifier for an email account. It is used to both send and receive email messages over the Internet. Similar to physical mail, an email message requires an address for both the sender and recipient in order to be sent successfully. Every email address has two main parts: a username and domain name. The username comes first, followed by an at (@) symbol, followed by the domain name. In the example below, "mail" is the username and "techterms.com" is the domain name. mail@techterms.com When a message is sent (typically through the SMTP protocol), the sending mail server checks for another mail server on the Internet that corresponds with the domain name of the recipient's address. For example, if someone sends a message to a user at techterms.com, the mail server will first make sure there is a mail server responding at techterms.com. If so, it will check with the mail server to see if the username is valid. If the user exists, the message will be delivered. Email Address Formatting While a basic email address consists of only a username and domain name, most email clients and webmail systems include names with email addresses. An email address that contains a name is formatted with the name first, followed by the email address enclosed in angle brackets, as shown below. Full Name <mail@techterms.com> Email can be sent to recipients with or without a name next to the email address. However, emails sent to addresses that include a name are less likely to be filtered as spam. Therefore, it is a good idea to fill in your full name when setting up an email account. Most mail clients and webmail systems will automatically include your name in your sending email address. NOTE: When manually typing an email address into the To: field, it is a good idea to use the "Full Name" formatting as shown above and include the person's name before the email address. This will help prevent the message from being incorrectly flagged as spam. Updated October 13, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/email_address Copy
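
Python's standard email.utils module can build and take apart the "Full Name <address>" format described above, which is a convenient way to see the two parts of an address. This is just one way to illustrate the formatting, not something the article itself prescribes.

from email.utils import formataddr, parseaddr

# Combine a display name and an address into the standard format.
to_header = formataddr(("Full Name", "mail@techterms.com"))
print(to_header)                 # Full Name <mail@techterms.com>

# Split a formatted address back into its name and address parts.
name, address = parseaddr(to_header)
username, domain = address.split("@")
print(name)       # Full Name
print(username)   # mail
print(domain)     # techterms.com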

Email Bomb

Email Bomb An email bomb or "mail bomb" is a malicious act in which a large number of email messages are sent to a single email address in a short period of time. The purpose of an email bomb is typically to overflow a user's inbox. In some cases, it will also make the mail server unresponsive. Email bombing is often done from a single system in which one user sends hundreds or thousands of messages to another user. In order to send the messages quickly, the email bomber may use a script to automate the process. By sending emails with a script, it is possible to send several thousand messages per minute. If performed successfully, an email bomb will leave the recipient with a pile of email messages in his or her inbox. It may also max out the recipient's email quota, preventing the user from receiving new email messages. The result is a frustrating situation where the user has to manually delete the messages. If the recipient's email client or webmail system does not allow the user to select all the unwanted messages at once, this process can take a long time to complete. Fortunately, most mail servers are capable of detecting email bombs before a large number of messages are sent. For example, if the server detects that more than ten messages are received from the same email address within one minute, it may block the sender's email address or IP address. This simple action will stop the email bomb by rejecting additional emails from the sender. Updated February 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/email_bomb Copy

Embed

Embed To embed an object or file is to place it entirely within another file. Embedding an object in a file is not just linking to it; the embedded object becomes part of the file that contains it. For example, when a word processing document contains a bitmap image, that image is embedded. When you open the document, it shows that image alongside text seamlessly without requiring an extra action to load the image separately. Once an object is embedded in a file, it's part of that file. Any edits or changes you want to make to the embedded object must happen within the file that contains it since that object is no longer an independent file. You could also delete the embedded object, update the file separately, and then re-import the newer version into the document. Multimedia content often works by embedding one type of file into another. For example, presentations often include embedded images, charts, and animations. Office documents often embed content as well, like an Excel spreadsheet placed into a Word document. Embedded content often appears on webpages. A small amount of HTML code can reference something on another website — like an individual tweet or YouTube video — and insert it into the webpage. The viewer doesn't need to click a hyperlink to open the content on a separate page; it's visible without any extra steps. However, unlike objects embedded in files, objects embedded in webpages still exist separately and are only pulled into the webpage at the time you load it. If a video is updated after you embed a link to it, the viewer will still get the newest version of it. NOTE: The term "embed" can also apply to hardware, such as a computer system that is integrated into an electronic device. Updated January 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/embed Copy

Emoji

Emoji An emoji is a small icon that can be placed inline with text. The name "emoji" comes from the Japanese words "e" (絵) and "moji" (文字), which together translate to "picture character." Since 2010, the popularity of emojis has grown rapidly. They are commonly used in text messaging, social media, and in apps like Instagram and Snapchat. They have largely replaced emoticons as the standard way to express an emotion in a message or comment. While smiley faces are the most commonly used emojis, they can also represent people, places, animals, objects, flags, and symbols. By inserting emojis into a message, you can emphasize a feeling or simply replace words with symbols. For example, instead of writing, "I love coffee; it makes me happy," you could simply write: I ❤️ ☕️ ☺️. How Emojis Work Emojis can be inserted inline with text because each icon corresponds to a Unicode value for a specific character. Inserting an emoji is just like typing a letter or symbol on your keyboard. However, in order for the emoji to be displayed, it must be supported by the operating system (OS). In other words, the OS must recognize the Unicode value and have an emoji that corresponds to it. If no emoji is found, either a blank space or an empty box will be displayed. Apple's iOS was the first major OS to offer system-level support, so Apple's database of emojis has historically been larger than those found on Android and Windows devices. Therefore, if you send an emoji recently added to Apple's database to a user with an Android device, he may only see an empty box on his device. NOTE: While the Unicode values for emojis have become standardized across platforms, there is no central database of emoji icons. Therefore, each platform displays different images for each emoji. For example, the "Smiling face" emoji will look different on Mac, Windows, and Android devices. Additionally, the way to insert emojis varies between operating systems, such as Windows and OS X. Updated February 27, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/emoji Copy
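
You can see the Unicode values behind emoji with a few lines of Python; the specific characters below are arbitrary examples. Each emoji is an ordinary character with a code point, which is why inserting one is no different from typing a letter.

heart = "\N{HEAVY BLACK HEART}"          # look up a character by its Unicode name
print(heart, hex(ord(heart)))            # ❤ 0x2764
print(chr(0x1F600))                      # 😀 (GRINNING FACE, U+1F600)

# The bytes actually stored or sent for "I ❤ ☕" in UTF-8:
print("I \u2764 \u2615".encode("utf-8"))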

Emoticon

Emoticon The term emoticon comes from the words "emotion" and "icon" and refers to facial expressions represented by keyboard characters. For example, the emoticon :-) represents a happy face and :-( represents a sad face. By inserting an emoticon into a message, you can help the recipient better understand the feeling you want to get across. While most emoticons represent expressions, they have branched out to symbolize many other things, such as people, animals, objects, and actions. Some emoticons are meant to be read left-to-right, while others are displayed vertically. Below are examples of different types of emoticons: :-D - very happy o.O - confused =^.^= - cat :o3 - dog *<:o) - clown C[_] - coffee cup T.T - crying "Kaomoji" emoticons, which originated in Japan, use Japanese symbols and uncommon characters to create unique emoticons. For example, the emoticon =( ^o^)ノ___o represents a person throwing a bowling ball. The "flipping table" emoticon (╯°□°)╯︵ ┻━┻ can be used to say you are really upset. The popularity of text-based emoticons led to the creation of actual icons or "emojis." Emojis (which are sometimes called emoticons) are now part of the standard character set in most mobile and desktop operating systems. This means you can insert actual smiley faces (or other expressions) instead of using a string of characters. For example, you can insert a ❤️ instead of using the text-based <3 emoticon. NOTE: If you want to have good netiquette, it's best to avoid using emoticons in formal communication. However, it's fine (and rather common) to use emoticons in text messages, informal emails, and on social media websites. Updated November 3, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/emoticon Copy

Emulation

Emulation Emulation is when one computer system is programmed to imitate another computer platform. Applications called emulators allow you to run software written for a particular hardware platform entirely within a simulated environment on another platform. One common type of emulator simulates an old or obsolete computing platform on a modern device. For example, an emulator for the Motorola 680x0 Macintosh can recreate a classic Mac inside a program window on another computer. These emulators allow you to run software, including the necessary operating system, that is not available for or compatible with modern computers. These types of emulators are popular with computer historians and preservationists. In addition to simulating old computers, emulators can imitate vintage video game consoles. Emulators are available to simulate consoles like the Super Nintendo or Sega Genesis, letting you play classic games without the original hardware. These emulators open games saved as .ROM files, which store an exact copy of the data from the original game cartridge or disc. The Nintendo Switch includes emulators for older consoles like the Super Nintendo Emulation software can also simulate specific pieces of hardware instead of an entire platform. The most common use for this type of emulation is a disc drive emulator, which allows you to mount a disk image file (for example, an .ISO file in Windows or a .DMG file on macOS) as if you were connecting an external disk or inserting it into an optical drive. Software developers often use these files to distribute software installers. Computer users may also create backup copies of their discs in case of physical damage. Compared to Virtualization Emulation is similar to virtualization — both types of technology allow a host computer to run another platform's operating system and applications within a software environment. However, virtualization is easier to achieve. Virtualization requires that the underlying hardware of the host and guest environments are the same, since virtualization software passes hardware calls from the virtual machine to the physical hardware. In an emulated environment, the emulator must instead translate those hardware calls from what the guest platform expects to what the host can provide. This requires that the host computer be considerably more powerful than the guest system it is emulating to match the same level of performance. Updated March 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/emulation Copy

Encoding

Encoding Encoding is the process of converting data from one form to another. While "encoding" can be used as a verb, it is often used as a noun, and refers to a specific type of encoded data. There are several types of encoding, including image encoding, audio and video encoding, and character encoding. Media files are often encoded to save disk space. By encoding digital audio, video, and image files, they can be saved in a more efficient, compressed format. Encoded media files are typically similar in quality to their original uncompressed counterparts, but have much smaller file sizes. For example, a WAVE (.WAV) audio file that is converted to an MP3 (.MP3) file may be 1/10 the size of the original WAVE file. Similarly, an MPEG (.MPG) compressed video file may only require a fraction of the disk space as the original digital video (.DV) file. Character encoding is another type of encoding that encodes characters as bytes. Since computers only recognize binary data, text must be represented in a binary form. This is accomplished by converting each character (which includes letters, numbers, symbols, and spaces) into a binary code. Common types of text encoding include ASCII and Unicode. Whenever data is encoded, it can only be read by a program that supports the correct type of encoding. For audio and video files, this is often accomplished by a codec, which decodes the data in real-time. Most text editors support multiple types of text encoding, so it is rare to find a text file that will not open in a standard text editor. However, if a text editor does not support the encoding used in a text document, some or all of the characters may appear as strange symbols rather than the intended text. Updated September 23, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/encoding Copy
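
A quick Python example makes the character-encoding side of this concrete (the word "café" is an arbitrary choice). Encoding turns text into bytes, and the same bytes decoded with the wrong encoding come out garbled:

text = "café"
utf8_bytes = text.encode("utf-8")              # encode the text as UTF-8 bytes
print(utf8_bytes)                              # b'caf\xc3\xa9' — "é" becomes two bytes
print(text.encode("ascii", errors="replace"))  # b'caf?' — ASCII cannot represent "é"

print(utf8_bytes.decode("utf-8"))              # café — the correct encoding restores the text
print(utf8_bytes.decode("latin-1"))            # cafÃ© — the wrong encoding shows odd symbols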

Encryption

Encryption Encryption is the process of converting data to an unrecognizable or "encrypted" form. It is commonly used to protect sensitive information so that only authorized parties can view it. This includes files and storage devices, as well as data transferred over wireless networks and the Internet. You can encrypt a file, folder, or an entire volume using a file encryption utility such as GnuPG or AxCrypt. Some file compression programs like Stuffit Deluxe and 7-Zip can also encrypt files. Even common programs like Adobe Acrobat and Intuit TurboTax allow you to save password-protected files, which are saved in an encrypted format. An encrypted file will appear scrambled to anyone who tries to view it. It must be decrypted in order to be recognized. Some encrypted files require a password to open, while others require a private key, which can be used to unlock files associated with the key. Encryption is also used to secure data sent over wireless networks and the Internet. For example, many Wi-Fi networks are secured using WEP or the much stronger WPA encryption. You must enter a password (and sometimes a username) to connect to a secure Wi-Fi network, but once you are connected, all the data sent between your device and the wireless router will be encrypted. Many websites and other online services encrypt data transmissions using SSL. Any website that begins with "https://," for example, uses the HTTPS protocol, which encrypts all data sent between the web server and your browser. SFTP, which is a secure version of FTP, encrypts all data transfers. There are many different types of encryption algorithms, but some of the most common ones include AES (Advanced Encryption Standard), DES (Data Encryption Standard), Blowfish, RSA, and DSA (Digital Signature Algorithm). While most encryption methods are sufficient for securing your personal data, if security is extremely important, it is best to use a modern algorithm like AES with 256-bit encryption. Updated November 11, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/encryption Copy
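
As a small illustration, the sketch below uses Fernet from the third-party Python cryptography package — one AES-based option among many, chosen here only for brevity. The data becomes unreadable ciphertext, and only someone holding the key can decrypt it:

from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()              # the secret key; keep it private
cipher = Fernet(key)

token = cipher.encrypt(b"Account number: 12345")
print(token)                             # scrambled, unrecognizable bytes

print(cipher.decrypt(token))             # b'Account number: 12345' — requires the key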

End User

End User An end user is the person that ultimately uses a piece of software or hardware. They are the last person at the end of the product development process after it has been designed, developed, tested, and delivered. Good product designers consider their needs during product development to create something they'll find helpful and pleasant to use. A product's end user is typically uninvolved in the development process, and in many cases, they are uninvolved in its administration and maintenance as well. However, some developers reach out and recruit some of their end users to help in the development process as beta testers. Beta testing allows some end users to use an early version of the product as they normally would, helping the developer find bugs while providing suggestions to improve the user experience. The end user is not always the product's customer, especially when it comes to business software. The person responsible for purchasing decisions is often in a different role than the people who will be using it regularly. A product might also have several different kinds of end user to consider. For example, an e-learning platform's end users consist of the instructors creating and teaching the material and the users taking the lessons, while the customer is the company's training manager who makes the purchasing decision. IT staff, meanwhile, administer and support the software but are not end users. Updated August 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/enduser Copy

Endianness

Endianness Endianness is a computer science term that describes how data is stored. Specifically, it defines which end of a multi-byte data type contains the most significant values. The two types of endianness are big-endian and little-endian. Big-Endian Big-endian is the most common way to store binary data. It places the most significant (or largest) value first, followed by less significant values. For example, the big-endian representation of the integer 123 places the hundreds value (1) first, followed by the tens value (2), then the ones value (3), or [123]. Little-Endian Little-endian stores the least-significant value first, followed by increasingly more significant values. For example, the number 123 in little-endian notation is [321]. The text string "ABC" is represented as [CBA]. Endian Conversion In most cases, developers do not have to specify endianness since the compiler generates the correct type of data for a specific platform. However, a program may need to process external input, such as a file format that stores data with a different endianness. In this case, the data must be converted from little-endian to big-endian or vice versa. Converting endianness is not as simple as reversing the data. The bytes, rather than the bits, must be reversed. In other words, each byte (or block of eight bits) must remain the same, but the order of the bytes is changed. This can be explained using the hexadecimal or binary representation of data. For example, the integer 41,394 is represented in big-endian notation as: hexadecimal: A1B2 binary: 1010000110110010 Converting this data to little-endian does not reverse the data, but rather the individual bytes within the data. Hexadecimal uses two digits to represent each byte – [A1][B2], while binary uses eight digits – [10100001][10110010]. Therefore, the little-endian representation of 41,394 is: hexadecimal: B2A1 binary: 1011001010100001 NOTE: Some processors can fetch data as either big-endian or little-endian with no conversion required. This is called bi-endian data access. Updated September 27, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/endianness Copy
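
The example above (41,394 = 0xA1B2) can be reproduced in a few lines of Python, showing how the same value is laid out byte by byte in each byte order:

import struct

n = 41394                                  # 0xA1B2, the value from the example above
print(n.to_bytes(2, "big").hex())          # a1b2 — most significant byte first
print(n.to_bytes(2, "little").hex())       # b2a1 — least significant byte first

# The struct module does the same when packing binary data:
print(struct.pack(">H", n))                # b'\xa1\xb2' (big-endian unsigned short)
print(struct.pack("<H", n))                # b'\xb2\xa1' (little-endian unsigned short)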

Enterprise

Enterprise In the IT world, enterprise refers to large businesses and organizations. It is often used to describe hardware, software, and technical services that are aimed at business customers. Enterprise hardware includes telecommunications systems such as large-scale network equipment, telephone systems, and SIP devices. It also includes server farms and infrastructure used for cloud computing. Though enterprise hardware often describes large-scale hardware systems, it may also refer to an individual device, such as as workstation or laptop designed specifically for business purposes. Enterprise software, also called enterprise application software (EAS), refers to software developed for businesses, non-profit organizations, and educational institutions. Examples of enterprise programs include accounting software like Intuit QuickBooks, project management programs like Microsoft Project, database management systems like Microsoft SQL Server, and content management systems like IBM Information Management solutions. Some applications are offered in multiple editions and may include an "Enterprise" edition that is designed specifically for large organizations. Examples of technical services that fall under the enterprise category include network installation and monitoring, business telephony services, and private and public cloud systems. Small-scale enterprise services include VPN setup, data backup, and web hosting. These services often include a combination of enterprise hardware and software that companies may either purchase, lease, or rent as part of the service agreement. Updated July 31, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/enterprise Copy

Enum

Enum Enum, short for "enumerated," is a data type that consists of predefined values. A constant or variable defined as an enum can store one of the values listed in the enum declaration. Enums are used in both source code and database tables. For example, an enum that stores multiple colors may be declared in C# as follows: enum Color { white, red, green, blue, black }; A column in a MySQL database table may be defined as below: Color ENUM ('white', 'red', 'green', 'blue', 'black') A variable or database value defined as Color can be assigned any of the five colors listed in the enum declarations above. If assigned any value besides one of the five colors above, it will remain undefined and may produce an error depending on the context. Also, enum variables may only contain one value. If a variable needs to store more than one predefined value, it should be defined as a SET instead. Enums provide a highly structured way to store data since they can only store a single predefined value. While this helps ensure data integrity, it also limits their flexibility. Therefore, enum variables are most appropriate for storing discrete data that can only be one of a few possible values. Examples include colors, sizes, categories, months, and days of the week. Variables that contain more varied data, such as names and places, should be defined as strings. Updated November 25, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/enum Copy
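For comparison, a rough Python sketch of the same idea (using the standard enum module) shows how an enum restricts a variable to its declared members and produces an error for any other value:

from enum import Enum

class Color(Enum):
    WHITE = "white"
    RED = "red"
    GREEN = "green"
    BLUE = "blue"
    BLACK = "black"

shirt_color = Color.BLUE        # valid: one of the five declared members
print(shirt_color.value)        # "blue"

try:
    Color("purple")             # not a declared member
except ValueError as error:
    print(error)                # 'purple' is not a valid Color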

EOL

EOL Stands for "End Of Life." EOL, pronounced "E-O-L," is commonly used in information technology to describe a product that is no longer maintained or supported. It may refer to either hardware or software. Hardware Hardware companies often label products as EOL several years after production has ended. When a device reaches EOL status, the company ends official support, no longer providing technical help or hardware repairs. Even if a company labels a product as EOL (or marks it "obsolete"), it may be possible to have the product repaired by a third-party service center. Hardware products typically reach EOL status 5 to 10 years after they are produced. The length of time depends on both the product and the manufacturer. Well-established companies typically provide longer support periods. Products that are frequently refreshed, such as smartphones and laptops, may be "EOLed" more quickly than other products. Devices that are only updated every few years may be supported for more than a decade. Software EOL status also applies to software, such as operating systems and software applications. When software is marked as EOL, it will still function. However, the developer no longer provides updates or technical support. In most cases, the development team will not fix additional bugs and the software may be incompatible with new hardware devices. Software products may be EOLed based on time (such as three years after the last update) or based on major version releases. For example, a developer may only support the two most recent versions of a software program and mark older versions as EOL. Both hardware and software companies use EOL as a means of clarifying which products they currently support. It allows hardware manufacturers to stop producing replacement parts for old products and focus on supporting new models. For software developers, it enables them to focus on new and future versions of their programs, rather than supporting apps released several years ago. NOTE: EOL also stands for "End Of Line," a computer science term that marks the end of a line of text. Updated March 27, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/eol Copy

Epoch Time

Epoch Time Epoch time is a method of representing time and dates by counting the number of seconds elapsed since a start date. This start date is referred to as the beginning of an epoch. Computer systems use epoch time to track the current time and date and to set created and modified timestamps. Since most computer servers use Unix or a Unix-like operating system, the term "epoch time" is most often used to refer to the Unix epoch that began at 12:00:00 AM UTC on January 1, 1970. However, other computer systems track time using other starting dates. Classic MacOS started its epoch at midnight UTC on January 1, 1904; modern Windows systems track epoch time from a start date of January 1, 1601. Windows computers also increment epoch time every 100 nanoseconds instead of every second. Examples of Unix epoch timestamps and the corresponding date and time Errors in a file or file system may occasionally reset a file's created or modified date to 0. When this happens, your computer's operating system will display the affected file as created at midnight UTC on January 1, 1970. Coordinated Universal Time (UTC) is based on Greenwich Mean Time (GMT), so if your local time zone is west of GMT, the date adjusts a few hours back to December 31, 1969. NOTE: Unix systems that still use a signed 32-bit integer to track epoch time will reach the maximum possible value on January 19, 2038. Updating affected software to use a 64-bit value instead will resolve this problem since a 64-bit value can track more than 500 billion years worth of seconds. Updated March 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/epoch_time Copy
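As a simple illustration, Python's standard datetime module can convert between Unix epoch timestamps and calendar dates, and a quick calculation shows where the signed 32-bit limit mentioned in the note falls:

from datetime import datetime, timezone

# Timestamp 0 marks the start of the Unix epoch.
print(datetime.fromtimestamp(0, tz=timezone.utc))            # 1970-01-01 00:00:00+00:00

# The largest value a signed 32-bit integer can hold.
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))    # 2038-01-19 03:14:07+00:00

# The current date and time expressed as an epoch timestamp.
print(datetime.now(tz=timezone.utc).timestamp())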

EPS

EPS Stands for "Encapsulated PostScript." EPS is an image file format that includes a vector image defined using the PostScript language, as well as a raster preview image that represents the file when placed in another document. The EPS format has long served as an industry-standard format for vector graphics that helps graphic designers create illustrations, logos, and page layouts. Modern image formats like PDF have gradually replaced EPS files in day-to-day use. While it still serves as a legacy format for professional applications and printers, it is not as widely supported as it once was. Since EPS files contain vector images, they maintain a high level of detail and crisp edges when printed at large sizes and are ideal for saving complex digital artwork, logos, and text. EPS files support both RGB and CMYK color, allowing accurate color representation on screen and in print. They use a form of lossless image compression that preserves detail while reducing file size. Graphic designers can create EPS files using most vector illustration applications, including Adobe Illustrator and CorelDRAW. However, you cannot edit an EPS file once it has been created — instead, you must edit the original vector artwork file and export a new EPS image. Adobe created the EPS file format in the 1980s to allow graphic designers to embed high-quality vector graphics in desktop publishing page layouts. Computers of the time were not powerful enough to be able to quickly draw these images on screen, which made it difficult to see how an image would look next to other elements on the page. To solve that problem, EPS files "encapsulate" a low-resolution bitmap image in a separate data fork that applications can use as a preview image on the screen. When sent to a PostScript-compatible printer, the printer ignores the preview bitmap and instead uses the PostScript data to print at the highest possible quality. File Extension: .EPS Updated October 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/eps Copy

Equalizer

Equalizer An equalizer or "EQ" is a sound engineering tool that adjusts the output of different frequencies. It allows you to cut or boost the levels of specific frequency ranges, providing more granular control of the sound volume. For example, an EQ enables you to amplify low "bass" frequencies while not affecting sounds in the mid or high "treble" range. Both hardware and software equalizers are available. A hardware EQ has physical knobs or sliders and processes analog audio signals using internal circuitry. A software EQ is controlled with a mouse or keyboard and processes digital audio through a computer. Several types of equalizers exist, but the most common are graphic and parametric. Graphic Equalizer The most common and well-recognized equalizer is the graphic EQ. It includes sliders that allow you to cut or boost specific frequencies. A typical graphic EQ may have anywhere from 3 to 31 bands. The more bands, the more granular the control. For example, a simple 3-band EQ may have "bass," "mid," and "treble," sliders. A 7-band EQ may have sliders for specific frequencies, such as 50 Hz, 120 Hz, 300 Hz, 800 Hz, 2 kHz, 5 kHz, and 12 kHz. Some EQs may only provide an output range of +/- 6 dB, while others allow you to cut or boost the level by more than 20 dB. Increasing the output of a specific frequency too much may cause distortion (or "clipping" in a digital audio signal). Therefore, it is usually best to decrease the levels of individual frequencies and increase the overall volume. Some EQs include a "preamp" slider that can boost the input levels, which raises the overall amplitude of the output signal. Parametric Equalizer A parametric EQ uses knobs instead of sliders. A semi-parametric EQ allows you to select the frequency and gain of each adjustment. A fully-parametric EQ enables you to modify the frequency, gain, and "Q" (or width) of each setting. Most graphic equalizers automatically adjust the frequencies around each slider, which "rolls off" the boost or cut to the surrounding frequencies. The "Q" setting of a parametric EQ allows you to control how quickly the adjustment rolls off. For example, a narrow Q targets a small frequency range, such as a snare drum hit. A wide Q targets a broad frequency range, such as a string section. Choosing an Equalizer The visual nature of graphic EQs makes them easy to use and understand. For example, a "smiley-face" EQ, which boosts the lows and highs and cuts the mids, is easy to recognize. Many consumer audio programs include graphic equalizers because they are simple to use. Fully-parametric EQs have a less-intuitive interface, but they provide more control over each frequency adjustment. Therefore, most audio professionals prefer to work with parametric equalizers. Other types of equalizers include filters and shelving EQs. Filters, such as low-pass and high-pass filters, roll off low and high frequencies, respectively. These are useful for removing too much bass or treble from an audio signal. A shelving EQ is similar to a low or high-pass filter, but it can also boost a low or high-end frequency. NOTE: Many software and hardware EQs have a "bypass" setting. This on/off switch provides a simple way to toggle between the original sound and the audio signal processed by the equalizer. Updated August 20, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/equalizer Copy

Ergonomics

Ergonomics Ergonomics is the study of how humans interact with manmade objects. The goal of ergonomics is to create an environment that is well-suited to a user's physical needs. While ergonomics is relevant in many areas, it is commonly applied to the workplace environment. For example, ergonomics is often used to create comfortable workstations for employees. This may involve choosing customized desks and chairs that fit each individual's body type. It may also include providing employees with ergonomic keyboards and wrist rests that provide better typing posture. Keyboards, mice, and other devices that are developed using ergonomics are said to have "ergonomic design." This type of design is produced by ergonomics studies that help engineers better understand the way people use devices. For example, an ergonomic chair may help support your lower back and prevent you from slouching. An ergonomic desk may adjust to the appropriate height, so you can sit up straight and view your monitor at the right level. An ergonomic keyboard may have a small slope on each side that fits your hand position more naturally than a flat keyboard. When multiple ergonomic objects are used together, it creates an ergonomically sound environment. This is especially important in office workplaces, where people often spend the majority of their day sitting in front of a computer. Small changes like adjusting a monitor to the correct height may prevent neck pain. A simple wrist rest may prevent carpal tunnel syndrome, which is caused by a pinched nerve in the wrist. By paying attention to ergonomics and making the necessary changes, you can avoid the aches and pains that bad posture and repetitive stress injuries can cause. NOTE: Even if you have an ergonomically friendly workstation, it is still helpful to stand up and walk around every half hour. Updated April 29, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ergonomics Copy

ERP

ERP Stands for "Enterprise Resource Planning." ERP is an umbrella term that covers all the resources of an enterprise business. When used as an adjective, ERP describes specific products or processes, e.g., ERP software, ERP management, and ERP insights. When used as a noun, ERP refers to the integration of enterprise business processes. For example, it may encompass the accounting, human resource, and sales operations of an enterprise organization. ERP Software In the IT world, ERP commonly refers to business management software. Most ERP software is cloud-based, meaning it runs on a server and is accessible over the Internet. Benefits of cloud-based software include: Centralized data storage Real-time updates Consistent look and feel for all users Easily upgradable Remotely accessible These benefits are significant to large organizations, such as enterprise companies. A centralized system makes it possible for global businesses to operate efficiently. For example, a project manager in California can check supply chain status in Taiwan and send updates to a team in Singapore using a single ERP system. Consolidating processes also makes it easier for businesses to analyze data and adapt as needed. For example, ERP software that integrates e-commerce and business intelligence (BI) can provide customer insights without requiring third-party analytics tools. Examples of ERP Software SAP Business One Oracle NetSuite Microsoft Dynamics Updated March 10, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/erp Copy

eSATA

eSATA Stands for "External Serial Advanced Technology Attachment." eSATA is a variation of the SATA interface that supports external storage devices. It was standardized in 2004 and uses the same pins and same protocols as internal SATA. However, it provides a slightly different, more rugged connector. The eSATA standard also supports a cable length of two meters compared to the one meter cable length supported by SATA. Since eSATA uses the same protocols as SATA, an eSATA drive offers the same high-speed data transfer rates as an internal SATA drive. For example, an eSATA 3.0 drive can transfer data at a line rate of 6 Gbps, or roughly 4.8 Gbps of usable throughput once the data encoding process is taken into account. This is significantly faster than Firewire 800 (800 Mbps) and USB 2.0 (480 Mbps) and is on par with USB 3.0 (5 Gbps). Because eSATA offers such fast transfer rates, it has been a popular external hard drive interface used by video editors, audio producers, and other media professionals. While eSATA is one of the fastest interfaces available, it is surpassed by both Thunderbolt (10 Gbps) and Thunderbolt 2.0 (20 Gbps), which are alternatives to eSATA. eSATAp Unlike Firewire, USB, and Thunderbolt, the eSATA interface does not provide power to connected devices. Therefore, all drives connected via eSATA must include a separate power connector to provide electricity to the device. A variation of eSATA, called eSATAp or eSATA USB Hybrid Port (EUHP), combines four USB pins and two 12-volt power pins into the eSATA connector. An eSATAp port supports both eSATA and USB connectors. It allows connected devices to draw power from the computer's power supply, eliminating the need for a separate power cable. Updated March 10, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/esata Copy
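The difference between the 6 Gbps figure and the roughly 4.8 Gbps of usable throughput comes from SATA's 8b/10b encoding, in which every 8 bits of data are sent as 10 bits on the wire. A quick back-of-the-envelope calculation in Python illustrates this:

line_rate_gbps = 6.0            # SATA 3.0 / eSATA 3.0 signaling rate
encoding_efficiency = 8 / 10    # 8b/10b encoding: 8 data bits per 10 transmitted bits

usable_gbps = line_rate_gbps * encoding_efficiency
print(usable_gbps)              # 4.8
print(usable_gbps * 1000 / 8)   # 600.0 -> roughly 600 MB/s of payload bandwidth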

Escape Key

Escape Key The Escape key is located in the upper-left corner of a computer keyboard. It typically resides to the left of the Function keys (F1, F2, F3, etc.) and above the tilde (~) key. Most often, it is labeled with the abbreviation "esc." The Escape key has many purposes, which have evolved over time. Most uses share the common action of exiting or "escaping" an operation. The Escape key is often used to quit, cancel, or abort a process that is running on a computer. Some specific examples of Escape key functions include: Closing a pop-up menu - You can press Escape to collapse the Windows Start Menu or the File menu in macOS. Selecting "Cancel" in a dialog box - When prompted with options such as "OK" and "Cancel," the Enter key is often used to select "OK," while the Escape key selects "Cancel." Reverting changes to a filename - If you are in the middle of changing the name of a file or folder, pressing Escape instead of Enter will revert to the unedited name. Exiting full screen mode - Watching a full screen YouTube video on your computer? Press Escape to return to the webpage view. Ending a presentation - Similar to exiting full-screen mode, pressing Escape will immediately end a presentation in programs like PowerPoint and Keynote. Pausing a game - Escape is the default pause button in several video games. Pressing Escape may bring up the in-game menu as well. Hiding the cursor - Several programs, as well as macOS, make the cursor disappear when you press Escape. While most applications don't require the use of the Escape (esc) key, it can be a handy shortcut for stopping or canceling operations on your computer. Updated May 25, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/escape_key Copy

eSIM

eSIM An eSIM, short for "embedded SIM," is a SIM chip, or "UICC," embedded in a mobile device. It provides the same functionality as a SIM card, but is not removable. What is a SIM? A SIM (Subscriber Identification Module) stores a unique number that identifies a device on a cellular network. When you activate a smartphone, tablet, or another device on a cellular network, the mobile provider links the SIM number to your account. When activating a smartphone, the cellular company also links the SIM to your mobile phone number. eSIM vs SIM Card Unlike SIM cards, which are removable and can be moved between devices, an eSIM is integrated into a device's hardware. Since eSIMs cannot be removed, they are writable and can be reprogrammed to work with different cellular providers. They can also store more than one SIM number at a time. Below are some advantages of eSIMs over traditional SIM cards: Smaller form factor - Generally, eSIMs require less than 1/3 the space of even the smallest nano-SIM cards. Electronically programmable - You don't need to go to a physical store or receive a new SIM card in the mail when moving to a new cellular provider. You can add the eSIM number electronically. Support for multiple SIMs - An eSIM makes it possible to use one device with multiple cellular providers at the same time. Common uses include separating personal and business calls and using local cellular plans when traveling internationally. Improved security - If your phone is lost or stolen, another person will not be able to use your SIM card. The first smartphones to use eSIMs hit the market around 2018. Many popular models now include eSIM chips. Since embedded SIMs only work with mobile providers that support eSIM technology, many eSIM devices also have a physical SIM card slot. Updated June 24, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/esim Copy

Esports

Esports Esports (pronounced "e-sports") is a general term used to describe video game competitions. Much like athletic sporting events, esports games are often played before live audiences and may be broadcast over the Internet as well. Esports dates back to the 1980s when gaming tournaments took place in arcades. In the 1990s, console gaming became more popular and video game competitions started being held in auditoriums and other large arenas. In the 2000s, as computer gaming grew in popularity, PC esports became more popular as well. Over the past several years, the Internet has fostered a new era in esports, as players are now able to compete in gaming competitions remotely. An esports match is conducted much like an athletic sporting event. Players must follow certain rules and a referee officiates the game. Sportscasters typically commentate the game, explaining what is happening in real-time. While esports games may be narrated by a single commentator, most large esports events are presented by two or more commentators at once. The rise of esports has produced a large number of professional video game players (or "pro gamers"). These players compete regularly in professional tournaments with cash prizes. Esports tournaments are typically sponsored by technology companies, but may also generate revenue through selling live tickets and online viewing subscriptions. Esports encompasses a wide range of video game genres. Popular real-time strategy (RTS) games include StarCraft 2, Dota, and League of Legends. Common first-person shooters include Call of Duty: Modern Warfare and Half-Life: Counter-Strike. Additionally, there are a few console games that are played in esports competitions, including Guitar Hero, Halo 3, and Super Street Fighter IV. Updated August 10, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/esports Copy

Ethernet

Ethernet Ethernet, pronounced "E-thernet" (with a long "e"), is the standard way to connect computers on a network over a wired connection. It provides a simple interface for connecting multiple devices, such as computers, routers, and switches. With a single router and a few Ethernet cables, you can create a LAN, which allows all connected devices to communicate with each other. A standard Ethernet cable is slightly thicker than a phone cable and has an RJ45 connector on each end. Ethernet ports look similar to telephone jacks, but are slightly wider. You can plug or unplug devices on an Ethernet network while they are powered on without harming them. Like USB, Ethernet has multiple standards that all use the same interface. These include: 10BASE-T - supports up to 10 Mbps 100BASE-T - supports up to 100 Mbps 1000BASE-T (also called "Gigabit Ethernet") - supports up to 1,000 Mbps Most Ethernet devices are backwards compatible with lower-speed Ethernet cables and devices. However, the connection will only be as fast as the lowest common denominator. For example, if you connect a computer with a 10BASE-T NIC to a 100BASE-T network, the computer will only be able to send and receive data at 10 Mbps. If you have a Gigabit Ethernet router and connect devices to it using 100BASE-T cables, the maximum data transfer rate will be 100 Mbps. While Ethernet is still the standard for wired networking, it has been replaced in many areas by wireless networks. Wi-Fi allows you to connect your laptop or smartphone to a network without being tethered to the wall by a cable. The 802.11ac Wi-Fi standard even provides faster maximum data transfer rates than Gigabit Ethernet. Still, wired connections are less prone to interference and are more secure than wireless ones, which is why many businesses and organizations still use Ethernet. NOTE: Ethernet is also known by its technical name, "IEEE 802.3." Updated November 20, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ethernet Copy

EUP

EUP Stands for "Enterprise Unified Process." EUP is a software development methodology that helps companies create software in a structured and organized manner. It is an extension of the Rational Unified Process (RUP), adding two new development phases -- Production and Retirement. Since the RUP includes four phases, the EUP consists of six phases: Inception - The idea for the project is stated. The development team determines if the project is worth pursuing and what resources will be needed. Elaboration - The project's architecture and required resources are further evaluated. Developers consider possible applications of the software and costs associated with the development. Construction - The project is developed and completed. The software is designed, written, and tested. Transition - The software is released to the public. Final adjustments or updates are made based on feedback from end users. Production - Software is kept useful and productive after being released to the public. Developers make sure the product continues to run on all supported systems and support staff provides assistance to users. Retirement - The product is removed from production, often called "decommissioning." It can either be replaced or simply no longer supported. The release of a new version of software often coincides with the retirement phase of an older version. For more information on the EUP, visit the Enterprise Unified Process Home Page. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/eup Copy

Exabyte

Exabyte An exabyte is 10^18 or 1,000,000,000,000,000,000 bytes. One exabyte (abbreviated "EB") is equal to 1,000 petabytes and precedes the zettabyte unit of measurement. Exabytes are slightly smaller than exbibytes, which contain 1,152,921,504,606,846,976 (2^60) bytes. The exabyte unit of measurement is so large that it is not used to measure the capacity of data storage devices. Even the storage capacity of the largest cloud storage centers is measured in petabytes, which is a fraction of one exabyte. Instead, exabytes are used to measure the sum of multiple storage networks or the amount of data transferred over the Internet in a certain amount of time. For example, several hundred exabytes of data are transferred over the Internet each year. NOTE: View a list of all the units of measurement used for measuring data storage. Updated December 7, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/exabyte Copy

Exbibyte

Exbibyte An exbibyte (EiB) is a unit of data storage equal to 2^60 bytes, or 1,152,921,504,606,846,976 bytes. It measures data at the same scale as an exabyte, but has a binary prefix instead of a decimal prefix. An exbibyte is slightly larger than an exabyte (EB), which is 10^18 bytes (1,000,000,000,000,000,000 bytes); an exbibyte is roughly 1.1529 exabytes. An exbibyte is 1,024 pebibytes, and 1,024 exbibytes make up a zebibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the same unit name could refer to two different numbers. When computer engineers first began using the term kilobyte to refer to a binary measurement of 2^10 bytes (1,024 bytes), the difference between binary and decimal measurements was roughly 2%. As file sizes and storage capacities expanded, so did the difference between the two types of measurements — an exbibyte is more than 15% larger than an exabyte. Using different terms to refer to binary and decimal measurements helps to address the confusion. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 22, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/exbibyte Copy
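The growing gap between the binary and decimal prefixes is easy to verify. The short Python calculation below shows the roughly 2% difference at the kilobyte scale and the more than 15% difference at the exabyte scale:

kibibyte, kilobyte = 2**10, 10**3
exbibyte, exabyte = 2**60, 10**18

print((kibibyte / kilobyte - 1) * 100)   # about 2.4  -> a kibibyte is ~2.4% larger than a kilobyte
print((exbibyte / exabyte - 1) * 100)    # about 15.3 -> an exbibyte is ~15.3% larger than an exabyte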

Excel

Excel Microsoft Excel is a spreadsheet application and one of the core apps in the Microsoft 365 productivity software suite (formerly called Microsoft Office). It helps its users analyze data from multiple sources, run complex calculations, and visualize data using charts and graphs. First introduced in 1985, Excel is now the most widely-used spreadsheet app and is available as a desktop app for Windows and macOS, a mobile app for iOS and Android, and a web app. Like other spreadsheet applications (including Apple Numbers, Google Sheets, or the discontinued Lotus 1-2-3), Excel helps you organize and analyze data. Spreadsheets consist of rows and columns of individual cells, which may contain data or formulas that perform calculations on that data. Each cell uses the intersection of a column (labeled with letters) and a row (labeled with numbers) for identification; for example, cell B5 is the intersection of the second column (B) and the fifth row. Excel supports spreadsheets with up to 16,384 columns (A through XFD) and 1,048,576 rows. An Excel spreadsheet with a chart summarizing some of its data Excel includes a large set of features that help make it the most popular spreadsheet app on the market. It integrates with the rest of the Microsoft 365 suite, which makes it easy to move data between an Excel spreadsheet and an Access database or insert graphs into Word documents and PowerPoint presentations. It also includes macro support, which allows you to write and record macros to automate repetitive tasks. It can connect to many different data sources, from special services like Microsoft Power BI to any ODBC-compatible database, easily importing data into Excel for analysis. Finally, it includes the same collaboration tools as the rest of the apps in the suite, allowing you to work alongside other people on the same spreadsheet whether you're on a computer, a mobile device, or using the web app. File extensions: .XLSX, .XLS, .XLTX, .XLT Updated September 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/excel Copy
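Spreadsheets can also be created and edited programmatically. As a small sketch (assuming the third-party openpyxl library, which reads and writes .XLSX files), the column-and-row cell addressing described above looks like this in Python; the filename is just a placeholder:

from openpyxl import Workbook

workbook = Workbook()
sheet = workbook.active

sheet["A1"] = "Item"
sheet["B1"] = "Price"
sheet["B5"] = 19.99              # cell B5: column B (second column), row 5
sheet["B6"] = "=SUM(B2:B5)"      # a formula, stored as text beginning with "="

workbook.save("example.xlsx")    # placeholder filename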

Exception

Exception An exception (short for "exceptional event") is an error or unexpected event that happens while a program is running. When an exception occurs, it interrupts the flow of the program. If the program can handle and process the exception, it may continue running. If an exception is not handled, the program may be forced to quit. Multiple programming languages support exceptions, though they are used in different ways. For example, exceptions are an integral part of the Java language and are often used to control the flow of a program. Java includes an Exception class, which has dozens of subclasses, such as TimeoutException, UserException, and IOException. Subclasses like IOException contain more specific exceptions like FileNotFoundException and CharacterCodingException that can be "thrown" if a file is not found or the character encoding of a string is not recognized. Other languages only use exceptions to catch fundamental runtime errors, such as a failure to allocate memory or system-level errors. For example, a C++ program may throw the bad_alloc exception when memory cannot be allocated and the system_error exception when the operating system produces an error. Exception Handling A well-written computer program checks for exceptions and handles them appropriately. This means the developer must check for likely exceptions and write code to process them. If a program handles exceptions well, unexpected errors can be detected and managed without crashing the program. Exceptions are "thrown" when they occur and are "caught" by some other code in the program. They can be thrown explicitly using the throw statement or implicitly within a try clause. Below is an example of "try / catch" syntax in Java. The following code attempts to divide by zero, but throws an ArithmeticException and returns 0 as the result.
 1. int a = 11;
 2. int b = 0;
 3. int result = 0;
 4. try {
 5.   int c = a / b;
 6.   result = c;
 7. } catch(ArithmeticException ex) {
 8.   result = 0;
 9. }
10. return result;
An exception is thrown on line 5 (when 11 is divided by 0), so the remainder of the try statement (line 6) is not executed. Instead, the exception is caught on line 7 and a result of 0 is returned. Updated May 6, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/exception Copy
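Other languages provide similar constructs with different syntax. For instance, the same divide-by-zero pattern might look like this in Python, where the failed division raises a ZeroDivisionError that the except clause catches:

a, b = 11, 0
result = 0

try:
    result = a / b           # raises ZeroDivisionError
except ZeroDivisionError:
    result = 0               # handle the exception and fall back to 0

print(result)                # 0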

Executable File

Executable File An executable file is a type of computer file that runs a program when it is opened. This means it executes code or a series of instructions contained in the file. The two primary types of executable files are 1) compiled programs and 2) scripts. On Windows systems, compiled programs have an .EXE file extension and are often referred to as "EXE files." On Macintosh computers, compiled programs have an .APP extension, which is short for application. Both types of executable files have been compiled from source code into binary machine code that is directly executable by the CPU. However, EXE files only run in Windows, while APP files only run in Mac OS X. This is because the code is executed by the operating system and therefore must be compiled in a format that the operating system can understand. Uncompiled executable files are often referred to as scripts. These files are saved in a plain text format, rather than a binary format. In other words, you can open a script file and view the code in a text editor. Since scripts do not contain executable machine code, they require an interpreter to run. For example, a PHP file can execute code only when run through a PHP interpreter. If a PHP interpreter is not available, the PHP script can only be opened as a text file. Since executable files run code when opened, you should not open unknown executable files, especially ones received as email attachments. While compiled executable files are the most dangerous, script files can run malicious code as well. For example, VBScript (.VBS) files can run automatically on Windows systems through the built-in Windows Script Host. Likewise, AppleScript (.SCPT) files can run through the AppleScript interpreter included with Mac OS X. Therefore, if you come across an unknown file and are unsure if it contains executable code, it is best not to open it. Below is a list of common file extensions used for executable files on Windows and Macintosh systems. Windows file extensions: .EXE, .COM, .BAT, .VB, .VBS, .WSF, .PIF Macintosh file extensions: .APP, .SCPT, .APPLESCRIPT Updated February 18, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/executable_file Copy

exFAT

exFAT Stands for "Extensible File Allocation Table." exFAT is a file system designed for removable flash drives. It allows for larger file sizes and storage volumes than FAT32. Most operating systems, game consoles, cameras, and other devices support exFAT, making it a cross-platform file system. It is the default file system for SDXC cards and many USB flash drives. FAT file systems like exFAT utilize a file allocation table, an index file that tracks the files stored on the disk volume. The file system creates this table when a disk volume is first formatted, then adds and removes entries as files are made, edited, and deleted. The FAT32 file system is limited to a maximum file size of 4 GB and volume size of 2 TB; exFAT uses a different file allocation table design that instead supports file and volume sizes up to 128 PB (or more than 128 million GB). While it supports hard disk drives, exFAT is primarily designed for removable flash drives, giving it advantages over other file systems when used for portable media. It is supported by multiple operating systems, as well as game consoles, cameras, and many other devices. NTFS, for example, is only supported by Windows, while APFS and HFS are only supported by macOS. Those file systems also have more advanced features like journaling, permissions, and encryption that add overhead and require more computational resources when compared to lightweight FAT file systems. Updated October 11, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/exfat Copy

EXIF

EXIF Stands for "Exchangeable Image File Format." EXIF is a standard means of tagging image files with metadata, or additional information about the image. It is supported by both the TIFF and JPEG formats, but is most commonly seen in JPEG images captured with digital cameras. When you take a picture with a digital camera, it automatically saves EXIF data with the photo. This typically includes the exposure time (shutter speed), f-number (aperture), ISO setting, flash (on/off), and the date and time. Some cameras may save additional EXIF data, such as the brightness value, white balance setting, metering mode, and sensing method. Many smartphones and some newer digital cameras also include GPS information, which is used for "geotagging" photos. When you view a digital photo on your computer, the EXIF data is typically hidden by default. Therefore, you may need to select an option such as "Get Info," "View Properties," or "Show Inspector" from within your photo viewing application in order to view the EXIF data. In Photoshop, for example, you can select File → File Info…, then click the "Advanced" tab to view the EXIF properties. NOTE: The PNG and GIF image formats do not support EXIF data. Updated May 28, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/exif Copy
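EXIF metadata can also be read programmatically. As a minimal sketch, assuming the third-party Pillow imaging library is installed (the filename is a placeholder), a photo's tags can be listed in Python like this:

from PIL import Image, ExifTags

image = Image.open("photo.jpg")                     # placeholder filename
exif = image.getexif()                              # mapping of numeric tag IDs to values

for tag_id, value in exif.items():
    tag_name = ExifTags.TAGS.get(tag_id, tag_id)    # translate tag IDs into readable names
    print(tag_name, value)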

Expansion Card

Expansion Card An expansion card is a printed circuit board that can be inserted into an expansion slot on a computer's motherboard to add new functionality. Some expansion cards can improve a computer's processing capabilities, while others can add extra input and output connections. Most desktop tower computers include several PCI Express expansion slots in multiple sizes. Common categories of expansion cards are listed below: Graphics cards (also called video cards) are the most common type of expansion card. A graphics card includes a special GPU that performs high-speed calculations to render 3D graphics for gaming, 3D modeling, and animation. Sound cards add input and output jacks to connect external audio devices like speakers and microphones. They also include digital-to-analog converters (DAC) to convert digital audio to analog audio signals, as well as analog-to-digital converters (ADC) to convert input. Network interface cards add ethernet ports or Wi-Fi antennas to provide a network connection. Using multiple NICs allows a computer to bridge multiple networks together. Storage controller cards add extra internal storage connections, like SATA ports or M.2 slots. USB and Thunderbolt controller cards add extra USB and Thunderbolt ports to a computer, allowing you to connect more external devices. Video capture cards add dedicated video input ports that allow you to record and stream video footage from another device. PCI Express slots come in several sizes — x1, x4, x8, and x16 — and expansion cards are available for all of them. PCI Express x1 slots are the smallest and offer the least total throughput; network interface cards and sound cards are the most common cards that use this slot. High-bandwidth network cards, video capture cards, and M.2 and NVMe storage adapters often fit into a PCI Express x4 slot. Some low-power graphics cards use the PCI Express x8 slot, but most use an x16 slot to access as much throughput as possible. An Intel dual-port NIC using a PCI Express x4 interface Only desktop tower computers (and some rack-mounted ones) have expansion slots. Some laptops in the 1990s supported expansion cards using the PCMCIA interface to add NICs and modems, but the introduction of USB ports and integrated Wi-Fi made that interface obsolete. Likewise, all-in-one computers like the Apple iMac use external expansion ports like USB and Thunderbolt for expansion. Updated June 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/expansion_card Copy

Exploit

Exploit An exploit is a program, piece of code, or set of commands designed to take advantage of a vulnerability in a software system. Hackers use exploits to gain access to a system, elevate that access to administrator (or root) permissions, then use that access to install malware, extract information, or disrupt operations. Vulnerabilities in a software system can take many forms. Some vulnerabilities are the result of bugs or oversights in the software code; other vulnerabilities come from mistakes made by a system administrator during configuration. Likewise, the ways an exploit can target systems can also take many forms. Some exploits target vulnerabilities in web browsers, email clients, or other software that opens files from the Internet. Others spread on their own over a computer network, infecting the first computer by other means, then scanning nearby computers and automatically running the exploit on vulnerable computers. Once an exploit is known to the developer of the vulnerable software, they can issue a hotfix to patch the hole. The longer a vulnerability is known, the greater the number of hackers that have access to it, so it is important to regularly install security updates and antivirus definitions. An exploit used by hackers before it is known to the affected software's developers is known as a zero-day exploit since the developer had zero days of notice to issue a patch and must work quickly to fix it. NOTE: Many website and software companies operate bug bounty programs to encourage ethical hackers to find and report vulnerabilities before they can be exploited by criminal hackers. Updated November 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/exploit Copy

Export

Export Export is a command usually found within a program's File menu (File → Export...). It is similar to the File → Save As... command, but is typically used for more specific purposes. For example, instead of simply saving a file with a different name or different format, "Export" might be used to save parts of a file, create a backup copy of a file, or save a file with customized settings. Since the Export command is only used for specific purposes, it is not available in all applications. For example, most text editors do not include an Export feature because text documents do not contain anything specific to export. Instead, the Export command is often found in multimedia programs, like photo and video editing applications. For example, Adobe Photoshop includes an Export command that allows users to export vector paths within an image as Adobe Illustrator files. Apple QuickTime Player allows users to export videos to multiple formats and provides advanced options for choosing the compression type, frame rate, video dimensions, and other settings. QuickTime's "Save As..." command only allows users to save movies in the standard QuickTime (.MOV) format. A program's Export command can be a useful tool if you need to save elements within a file or if you would like to save a version of the file with customized settings. Therefore, if a program's "Save As..." command doesn't have the options you are looking for, select File → Export... and you might find exactly what you need. Many programs also include an Import command, which is used for opening specific types of files. Updated September 2, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/export Copy

Extended Reality

Extended Reality Extended reality, abbreviated "XR," is an umbrella term for technologies that replace or augment the real world with a computer-generated simulation. It encompasses the three immersive technologies of virtual reality (VR), augmented reality (AR), and mixed reality (MR). Virtual reality fully immerses a person in a digitally-simulated environment. Users of virtual reality wear a special headset, or HMD, that fully blocks out their surroundings. A VR headset allows people to experience either 360º video or a fully-computer-generated setting. Augmented reality overlays computer-generated imagery on a real-world environment. AR uses a smartphone or other camera to analyze a person's surroundings, then displays the real-time video feed with additional imagery superimposed. Mixed reality is similar to augmented reality, blending real and virtual worlds together. It supports more interaction between the real and virtual worlds than augmented reality. For example, an augmented reality game may show a virtual character standing in a real environment. In a mixed reality game, that character could interact with its surroundings by hiding behind a real object. As new kinds of immersive technologies emerge, they will exist under the umbrella term of extended reality. For example, current XR technologies only utilize two human senses — sight and hearing. Future technologies could include simulation for other senses like touch, taste, and smell. Updated January 16, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/extended_reality Copy

Extensible

Extensible Extensible is an IT term used to describe something that can be extended or expanded from its initial state. It typically refers to software, such as a program or file format, though it can also be used to describe a programming language itself. An extensible software program, for example, might support add-ons or plug-ins that add extra functionality to the program. It may also allow you to add custom functions or macros that perform specialized tasks within the application. An extensible file format (like XML) can be customized with user-defined elements. If a programming language is extensible, it may support custom syntax and operations. These custom elements can be defined in the source code and are recognized by the compiler along with the pre-defined elements. Examples of extensible programming languages include Ruby, Lua, and XL. Scalable vs. Extensible While the terms scalable and extensible are sometimes used interchangeably, they are not the same thing. Scalability can refer to hardware, software, or an entire IT system, such as a cloud-based service. Extensibility, on the other hand, is almost always used to describe software and refers specifically to its extendable capabilities. For instance, a software program that supports plugins is extensible, but not necessarily scalable. A server rack that has several empty slots for future use may be considered scalable, but is not extensible. Updated October 31, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/extensible Copy

External Hard Drive

External Hard Drive An external hard drive is a hard drive in a separate enclosure that can connect to a computer to store and transfer data. They provide extra data storage space when internal drives are nearly full, serve as destinations for system backups, and help transfer large amounts of data between computers. External hard drives have used multiple interfaces over the years — SCSI, USB 2.0, Firewire, External SATA, and USB 3.0. Most external drives now use USB 3.2 or Thunderbolt connections, which offer significantly faster data transfer speeds than older standards. USB 3.2 can transfer data up to 20 Gbps, and Thunderbolt can transfer data up to 40 Gbps. External hard drives are an important part of a digital backup plan. One can be kept in a safe location between backups in case the computer is damaged or lost. Using the drive to restore data or perform another backup is as simple as connecting it to the computer, and either copying the necessary files from one drive to another, or using a backup utility to automate the process. NOTE: While many high-capacity external drives use hard disk drives, external solid-state drives (SSDs) are increasingly common. Since they lack moving parts, they're less susceptible to damage by being carried in a laptop bag. Updated January 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/external_hard_drive Copy

Extranet

Extranet An extranet is a private intranet (or internal network) that is partially accessible over the Internet. It allows for limited and controlled access to resources on an intranet to users outside of a business or organization. Access to an extranet typically happens by visiting a web portal in a browser, and provides secure remote access to confidential information and intranet-specific applications. Extranets use common networking technologies to provide outside users access to an intranet. An extranet may require simple authentication using usernames and passwords, or it may employ more secure multi-factor authentication methods that use mobile devices or password-generating hard tokens. Once the user is signed in, data transfers are typically encrypted using TLS; some extranets may even require the use of a VPN to provide extra security. Extranets often use firewalls to restrict incoming and outgoing network traffic, and intrusion detection and prevention systems (IDPS) watch for suspicious activity like repeated failed login attempts, blocking connections when necessary. Extranets provide access to an intranet to remote users, protected by firewalls and other protection methods The use of extranets is not as widespread as it once was, thanks to the increasing popularity of SaaS and cloud computing platforms that move resources and applications off of intranets and into the cloud. However, extranets still have their uses and are common in certain industries. Customer portals are a common form of extranet, allowing customers to view their order history, track shipments, and interact with customer support. They're also common in supply chain management, helping suppliers and distributors communicate the status of orders and inventory. Finally, companies involved in joint ventures (like a construction company and an architecture firm working on a building) can use an extranet to share confidential information and track the status of their projects. Updated November 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/extranet Copy

Facebook

Facebook Facebook is a social networking website that was originally designed for college students, but is now open to anyone 13 years of age or older. Facebook users can create and customize their own profiles with photos, videos, and information about themselves. Friends can browse the profiles of other friends and write messages on their pages. Each Facebook profile has a "wall," where friends can post comments. Since the wall is viewable by all the user's friends, wall postings are basically a public conversation. Therefore, it is usually best not to write personal messages on your friends' walls. Instead, you can send a person a private message, which will show up in his or her private Inbox, similar to an e-mail message. Facebook allows each user to set privacy settings, which by default are pretty strict. For example, if you have not added a certain person as a friend, that person will not be able to view your profile. However, you can adjust the privacy settings to allow users within your network (such as your college or the area where you live) to view part or all of your profile. You can also create a "limited profile," which allows you to hide certain parts of your profile from a list of users that you select. If you don't want certain friends to be able to view your full profile, you can add them to your "limited profile" list. Another feature of Facebook, which makes it different from MySpace, is the ability to add applications to your profile. Facebook applications are small programs developed specifically for Facebook profiles. Some examples include SuperPoke (which extends Facebook's "poke" function) and FunWall (which builds on the basic "wall" feature). Other applications are informational, such as news feeds and weather forecasts. There are also hundreds of video game applications that allow users to play small video games, such as Jetman or Tetris, within their profiles. Since most game applications save high scores, friends can compete against each other or against millions of other Facebook users. Facebook provides an easy way for friends to keep in touch and for individuals to have a presence on the Web without needing to build a website. Since Facebook makes it easy to upload pictures and videos, nearly anyone can publish a multimedia profile. Of course, if you are a Facebook member or decide to sign up one day, remember to use discretion in what you publish or what you post on other users' pages. After all, your information is only as public as you choose to make it! Updated January 14, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/facebook Copy

Factory Reset

Factory Reset A factory reset restores an electronic device's software to a fresh state, similar to when it left the factory. It removes all user data, and the software operates as a new device. Many devices have a factory reset option, including computers, tablets, smartphones, smart TVs, AVRs, and smart appliances. The top two reasons for performing a factory reset are: Fixing a persistent software issue Clearing a device before transferring it to another user Performing a factory reset will erase all your data. Make sure you have a recent backup or have transferred your data to another device before selecting this action. The factory reset command is usually found within the Settings → General menu of a device's user interface. In most cases, the command is not labeled "factory reset," but something similar. Below are examples of verbiage used by popular operating systems. Windows: Reset PC macOS: Erase All Content and Settings Chrome OS: Powerwash Android: Erase all data (factory reset) iOS: Erase all Content and Settings Factory Reset vs. System Restore The terms "factory reset" and "system restore" are often used interchangeably and may refer to the same operation. However, a system restore may reference a backup or system snapshot to restore user data instead of resetting a device to factory settings. Therefore, a factory reset is best when discarding or transferring a device to another user. NOTE: Usually, a factory reset maintains the most recently installed version of the operating system. However, if the device is restored from firmware, performing a factory reset may revert the system software to the original operating system. Updated July 6, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/factory_reset Copy

FAQ

FAQ Stands for "Frequently Asked Questions." An FAQ, pronounced "F-A-Q," is a list of answers to common questions about a specific product or service. In the IT world, FAQs are created for software programs, computer hardware, websites, and online services. They serve as a central reference for locating answers to common questions. Since FAQs are based on user feedback, they typically evolve over time. For example, a software company may receive a large number of emails regarding a specific step in their software installer. The company may clarify the step in their FAQ so that users can find the answer without needing to email the company. This cuts down on technical support, saving time for both the software company and the end users. Some software programs and hardware devices come with an FAQ document. In some cases, the FAQ is contained in the "readme" file, though it may also be a separate file or included within a printed manual. Most often, FAQs are located on a website. This allows the respective company or organization to regularly update the FAQ based on users' questions. Most FAQs are located within the "Support" section of a website. Updated November 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/faq Copy

FAT32

FAT32 Stands for "File Allocation Table 32-bit." FAT32 is a 32-bit version of the FAT file system. Formerly the default file system on Windows, it is now most commonly used on small removable flash drives due to its wide cross-platform support on computers, game consoles, and other devices. It supports a maximum individual file size of 4 GB, and a maximum volume size of 2 TB. A volume formatted using a FAT file system is broken up into uniformly-sized clusters. Each volume includes an index table, called the File Allocation Table, that records every file on the disk and which clusters that file occupies. This table is updated whenever a file is created, edited, or deleted. Even if a file becomes fragmented (scattered across nonadjacent clusters), the File Allocation Table can keep track of it. FAT file systems regularly keep two copies of the File Allocation Table in case one of them becomes damaged. Each table entry in a FAT32 volume uses 32 bits for the cluster address instead of the 16 bits used in a FAT16 volume. A FAT32 volume can contain far more clusters than FAT16 (268,435,456 instead of 65,526), meaning that a disk can be divided into smaller clusters. Since a cluster can only be associated with one file, smaller clusters are a more efficient use of disk space. Nearly all operating systems support FAT32, making it a common cross-platform file system for small removable drives. However, it has a limited feature set compared to more modern file systems like NTFS (used by Windows) or APFS (used by macOS). It lacks journaling, permissions, and other security features. A successor file system, exFAT, maintains the same cross-platform support as FAT32 while increasing the maximum file and volume size to 128 PB (or more than 128 million GB). Updated October 25, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/fat32 Copy
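Conceptually, the file allocation table behaves like a linked list: each entry points to the cluster that holds the next piece of a file, ending with an end-of-chain marker. The simplified Python sketch below uses a made-up table and marker value (not the real on-disk format) to show how a fragmented file's clusters are followed in order:

END_OF_CHAIN = -1    # stand-in for the real end-of-chain marker value

# Hypothetical allocation table: key = cluster number, value = next cluster in the file.
fat = {5: 9, 9: 6, 6: 12, 12: END_OF_CHAIN}    # a file fragmented across clusters 5, 9, 6, and 12

def cluster_chain(table, first_cluster):
    """Follow a file's cluster chain through the allocation table."""
    cluster = first_cluster
    while cluster != END_OF_CHAIN:
        yield cluster
        cluster = table[cluster]

print(list(cluster_chain(fat, 5)))    # [5, 9, 6, 12]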

Favicon

Favicon A favicon is a small icon that identifies a website in a web browser. Most browsers display a website's favicon on the left side of the address bar, next to the URL. Some browsers may also display the favicon in the browser tab, next to the page title. Favicons are automatically saved along with bookmarks or "favorites" as well. Favicons have been around since the early 2000s and are supported by all major web browsers. However, different browsers provide different implementations of the favicon. For example, Firefox, Internet Explorer, and Safari all display favicons in the address bar, but Google Chrome only displays them in the page tabs. Most browsers support favicons saved as .GIF, .PNG, or .JPG files, but Internet Explorer only displays favicons saved in the .ICO format. The standard way to implement a favicon on a website is to upload a small 16x16 pixel image named favicon.ico to the root directory of the website. When a user loads a page from the website in a web browser, the browser looks for the favicon.ico file, and if it finds one saved in a supported format, it automatically displays the icon next to the URL or the page title. The favicon can also be specified in the HTML of a webpage using a <link> tag in the page's <head> section, such as: <link rel="shortcut icon" href="/favicon.ico"> The HTML method typically overrides the favicon saved in the root directory, which can be useful if you want to display a custom favicon for certain pages within a website. NOTE: The standard size of a favicon is 16x16 pixels, though most browsers will recognize favicons saved as 32x32, 48x48, and 64x64 pixel images as well. Favicons larger than 16x16 pixels are typically scaled down to 16x16 so that they display in the browser correctly. However, browsers that support retina displays will display 32x32 pixel icons in their native resolution. Updated September 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/favicon Copy

Favorites

Favorites While most Web browsers store saved webpage locations as bookmarks, Internet Explorer saves them as favorites. For example, when you save a webpage location in Firefox, it gets stored as a bookmark. When you save one in Internet Explorer, it gets stored as a favorite. For this reason, the terms "bookmarks" and "favorites" are often used synonymously. Favorites are also used in other applications besides Web browsers. For example, media players often include a favorites list, which allows users to store references to favorite audio and video files in a single location. Media editing programs often include a favorites panel, which contains links to files that can be imported into projects. Mac OS X has a "Favorites" folder in which users can store aliases to frequently accessed files and folders. Windows 7 also has a "Favorites" folder, which is used to store both favorite webpages and favorite files. You can often identify a Favorites folder by a star or heart icon. Most applications allow you to simply drag items into the Favorites folder to add them to your favorites. While "favorites" may refer to a wide variety of items, the purpose of a favorites folder is always to provide easy access to frequently used items. Updated September 8, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/favorites Copy

FDDI

FDDI Stands for "Fiber Distributed Data Interface." FDDI is a group of networking specifications standardized by ANSI in the mid-1980s. An FDDI network supports data transfer speeds of 100 Mbps over a fiber optic cable and uses a rotating token to define which system can send data at any given time. FDDI networks are comprised of two physical paths, or "rings," that transfer data in opposite directions. The primary ring carries data between systems, while the secondary ring is used for redundancy. If a system on the network causes an interruption in the primary data path, the secondary ring is used until the primary ring is functional again. A variation of FDDI, called FDDI Full Duplex Technology (FFDT), uses the secondary ring as an additional primary channel. This type of FDDI network has no redundancy, but supports data transfer rates up to 200 Mbps. FDDI was designed in the 1980s to provide faster networking than the 10 Mbps Ethernet and 16 Mbps token ring standards available at the time. Because of its high bandwidth, FDDI became a popular choice for high-speed backbones used by universities and businesses. While FDDI was the fastest LAN technology for several years, it was eventually superseded by Fast Ethernet, which offered 100 Mbps speeds at a much lower cost. Today, many networks use Gigabit Ethernet, which supports speeds up to 1,000 Mbps. Updated July 12, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fddi Copy

Fiber Optic Cable

Fiber Optic Cable Fiber optic cable is a high-speed data transmission medium. It contains tiny glass or plastic filaments that carry light beams. Digital data is transmitted through the cable via rapid pulses of light. The receiving end of a fiber optic transmission translates the light pulses into binary values, which can be read by a computer. Because fiber optic cables transmit data via light waves, they can transfer information extremely quickly. Not surprisingly, fiber optic cables provide the fastest data transfer rates of any data transmission medium. They are also less susceptible to noise and interference compared to copper wires or telephone lines. However, fiber optic cables are more fragile than their metallic counterparts and therefore require more protective shielding. While copper wires can be spliced and mended as many times as needed, broken fiber optic cables often need to be replaced. Since fiber optic cables provide fast transfer speeds and large bandwidth, they are used for a large part of the Internet backbone. For example, most transatlantic telecommunications cables between the U.S. and Europe are fiber optic. In recent years, fiber optic technology has become increasingly popular for local Internet connections as well. For example, some ISPs now offer "fiber Internet," which provides Internet access via a fiber optic line. Fiber connections can provide homes and businesses with data transfer speeds of 1 Gbps. Updated December 5, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fiber_optic_cable Copy

Field

Field A field is a user interface element designed for entering data. Many software applications include text fields that allow you to provide input using your keyboard or touchscreen. Websites often include form fields, which you can use to enter and submit information. In software programs, the terms "field" and "text box" may be used interchangeably. For example, a word processor may provide several formatting options, such as font size, line spacing, and page margins. Each option includes a text box where you can manually enter custom settings. Many applications also include a search field (or "search box") that allows you to search the contents of one or more documents. When you visit a website, it may provide a form that allows you to enter data, such as your billing address or registration information. Each single-line text box within a web form is called an "input" field and is defined by the <input> tag in HTML. Fields with more than one line are called text areas and are created using the <textarea> tag. You may also encounter login forms that include two fields for entering your username and password. Most password fields are defined as <input type="password">, which hides the characters as you type. NOTE: Databases also include fields. Each row or "record" in a database table may include multiple fields. The table columns define what fields are available in each row. Therefore, a specific column and row combination (such as Row: 101, Column: Name) defines a specific field. Individual fields in a database may be searched and modified using standard SQL queries. Updated August 1, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/field Copy
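As a rough illustration of how a program receives the contents of a form field, the PHP sketch below reads a single-line field named "email" from a submitted web form; the field name is an arbitrary example, not part of any particular site.

<?php
// Read the value of the "email" form field submitted via POST.
// The ?? operator supplies a default if the field was left empty or is missing.
$email = $_POST['email'] ?? '';

// Escape user-entered field values before displaying them back on a page.
echo "You entered: " . htmlspecialchars($email);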

FIFO

FIFO Stands for "First In, First Out." FIFO is a method for organizing, processing or retrieving data or other objects in a queue. In a FIFO system, the data that has been waiting the longest gets processed first whenever there is an opening. New objects are added to the back of the queue and must wait their turn as the system processes each object in order. The FIFO model is one of the most basic ways to process data. It does not allow one object to jump the line or weigh the priority of multiple items when choosing what to process next — everything waits its turn without exception. The opposite of the FIFO model is the LIFO model, or last-in-first-out, where the newest entry is processed first. A printer queue uses First In, First Out to schedule the oldest print jobs first Many computer queues operate using a FIFO model. For example, a network printer in a busy office will use a FIFO queue to schedule print jobs as they come in, even if your document is only two pages and the job right before yours is a hundred. Computers also typically use FIFO scheduling when pulling data from an array or buffer. Disk write scheduling, process scheduling, and message systems also often use a FIFO model to handle requests in order without consideration for high or low priority. Updated July 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/fifo Copy
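A queue built from a PHP array is a simple way to see FIFO behavior in code: new items are added to the back and removed from the front. The print job names below are made up for illustration.

<?php
$queue = [];

// New print jobs are added to the back of the queue...
array_push($queue, "Report.docx");
array_push($queue, "Invoice.pdf");
array_push($queue, "Photo.jpg");

// ...and processed from the front, oldest first.
while (!empty($queue)) {
    $job = array_shift($queue);   // removes the first (oldest) element
    echo "Printing $job\n";
}
// Prints the jobs in the order they were received: Report.docx, Invoice.pdf, Photo.jpg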

File

File A file is a collection of digital data stored as a single object on a disk. The type of data stored in a file depends on its file type, typically consisting of text, images, multimedia audio and video streams, data libraries, executable software, or other information. A file can be identified by its filename, extension, and location in the disk's file system. Every file on your computer starts with a header — a special string that tells software programs and the computer's operating system how its contents are encoded, along with metadata like the file's author, created date, and total file size. For example, a JPEG image's header starts with a string that identifies it as a JPEG image, then includes other information like its dimensions, color information, and compression settings. Next comes the file's contents, encoded as either human-readable plain text (using ASCII, UTF-8, or another text encoding method) or as binary data. Finally, a file ends with either an end-of-file marker (another special string like the file header) or by reaching the end of the file size specified in the header. Several files in the macOS Finder, with a dialog box displaying one file's extended information When a program opens a file, it copies the file from the storage disk to the system memory (RAM), where it can read and modify data much faster. Any changes to a file made while it is open won't be committed to the version on the disk until you save it. You can also move or copy a file to a new folder or another disk entirely; you can also transfer a file to another location over a network connection or the Internet. If you don't need a file anymore, you can delete it to free up its disk space. There are countless file types that each store data in a file in their own way. The most common way to identify a file's type is by looking at its file extension, which is found at the end of the file name and is typically three or four letters after a period (.). For example, plain text files use the extension .TXT, while a file with the .PNG extension is a PNG image. Updated September 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/file Copy
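To show how a header identifies a file's contents, the PHP sketch below reads the first few bytes of a file and checks for the JPEG signature (JPEG files begin with the bytes FF D8 FF); the file name is a placeholder.

<?php
$handle = fopen("photo.jpg", "rb");    // placeholder file name
if ($handle === false) {
    exit("Could not open the file.");
}
$header = fread($handle, 3);           // read the first three bytes
fclose($handle);

if ($header === "\xFF\xD8\xFF") {
    echo "This file has a JPEG header.";
} else {
    echo "This is not a JPEG file.";
}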

File Association

File Association A file association is a relationship between a file type and a supporting application. For example, a Word document may be associated with Microsoft Word. This means when you double-click a Word document, Microsoft Word will open the file. Both the Windows and Mac OS X operating systems use file associations to define what default program is used to open each file type. For example, in Windows, plain text (.TXT) files are commonly associated with Microsoft Notepad. In Mac OS X, they are associated with Apple TextEdit. Therefore, if you double-click a plain text file in Windows, it will open in Notepad, while in Mac OS X, it will open in TextEdit. Since some file types have multiple file extensions, file associations relate each file extension to a specific program. For example, a file association for the HTML file type may require both .HTM and .HTML files to be associated with a specific Web browser. The file association for JPEG image files may require .JPG, .JPE, and .JPEG files to be associated with a specific image viewer. In Windows, file associations are defined in the registry, while in Mac OS X, they are listed in the LaunchServices preferences. Fortunately, if you want to change a file association on your computer, you don't need to access either of these locations. Instead, both Windows and Mac OS X provide a simple user interface for modifying file associations. For instructions on changing file associations on Windows and Macintosh systems, visit the pages below. Windows XP Windows 7 Mac OS X NOTE: While you can manually change file associations using the methods above, some applications can also change file associations for you. For example, when you install or open a program, it may ask you if you want to make it the default program for all supported file types. If you select "Yes," the program will assign all supported file types to itself. Updated June 25, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_association Copy

File Compression

File Compression File compression is used to reduce the file size of one or more files. When a file or a group of files is compressed, the resulting "archive" often takes up 50% to 90% less disk space than the original file(s). Common types of file compression include Zip, Gzip, RAR, StuffIt, and 7z compression. Each one of these compression methods uses a unique algorithm to compress the data. So how does a file compression utility actually compress data? While each compression algorithm is different, they all work in a similar fashion. The goal is to remove redundant data in each file by replacing common patterns with smaller variables. For example, words in a plain text document might get replaced with numbers or another type of short identifier. These identifiers then reference the original words that are saved in a key within the compressed file. For instance, the word "computer" may be replaced with the number 5, which takes up much less space than the word "computer." The more times the word "computer" is found in the text document, the more effective the compression will be. While file compression works well with text files, binary files can also be compressed. By locating repeated binary patterns, a compression algorithm can significantly reduce the size of binary files, such as applications and disk images. However, once a file is compressed, it must be decompressed in order to be used. Therefore, if you download or receive a compressed file, you will need to use a file decompression program, such as WinZip or StuffIt Expander, to decompress the file before you can view the original contents. Related file extensions: .ZIP, .GZ, .RAR, .SITX, .7Z. Updated April 8, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_compression Copy
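The effect of removing redundant data can be demonstrated with PHP's built-in zlib functions (assuming the zlib extension is enabled, as it is in most PHP builds); highly repetitive text compresses dramatically.

<?php
// Highly repetitive data compresses very well.
$original = str_repeat("computer ", 1000);     // about 9,000 bytes of text
$compressed = gzcompress($original);

echo "Original:   " . strlen($original) . " bytes\n";
echo "Compressed: " . strlen($compressed) . " bytes\n";

// Decompressing restores the exact original data.
echo (gzuncompress($compressed) === $original) ? "Restored OK" : "Mismatch";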

File Extension

File Extension A file extension (or simply "extension") is the suffix at the end of a filename that indicates what type of file it is. For example, in the filename "myreport.txt," the .TXT is the file extension. It indicates the file is a text document. Some other examples include .DOCX, which is used for Microsoft Word documents, and .PSD, which is the standard file extension for Photoshop documents. While most file extensions are three characters in length, they can be as short as one character or longer than twenty characters. Sometimes long file extensions are used to more clearly identify the file type. For example, the .TAX2015 file extension is used to identify TurboTax 2015 tax returns and the .DESKTHEMEPACK extension identifies Windows 8 desktop themes. The file extension determines which program is used to open the file as well as what icon should be displayed for the file. It also helps you see what kind of file a certain document is by just looking at the filename. Both Windows and Mac OS X allow you to manually change file extensions, which may also change the program the computer uses to open the file. While this might work for some files, it may also cause the file to not open at all. For example, if you change a file with a ".txt" extension to a ".doc" extension, Microsoft Word may still open it. However, if you change a ".txt" file to a ".psd" file, Photoshop will not recognize or open the file. Since there are tens of thousands of file types, there are also tens of thousands of file extensions. While it may not be possible to remember all of them, it is helpful to learn some of the more common ones, such as .JPG, .GIF, .MP3, .ZIP, .HTML, and others. For a list of common file extensions and their associated file types, visit FileInfo.com's Common File Types page. Updated November 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_extension Copy
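Programs often need to read the extension from a filename; in PHP this can be done with the built-in pathinfo() function, as in the short example below (the filename is hypothetical).

<?php
$filename = "myreport.txt";    // hypothetical filename

// pathinfo() splits a filename into its parts, including the extension.
$extension = pathinfo($filename, PATHINFO_EXTENSION);

echo $extension;    // prints "txt"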

File Format

File Format A file format defines the structure and type of data stored in a file. The structure of a typical file may include a header, metadata, saved content, and an end-of-file (EOF) marker. The data stored in the file depends on the purpose of the file format. Some files, such as XML files, are used to store lists of items, while others, such as JPEG image files, simply contain a block of data. A file format also defines whether the data is stored in a plain text or binary format. Plain text files can be opened and viewed in a standard text editor. While text-based files are easy to create, they often use up more space than comparable binary files. They also lack security, since the contents can be easily viewed by dragging the file to a text editor. Binary file formats can be compressed and are well-suited for storing graphics, audio, and video data. If you attempt to view a binary file in a text editor, most of the data will appear garbled and unintelligible, but you may see some header text that identifies the file's contents. Some file formats are proprietary, while others are universal, or open formats. Proprietary file formats can only be opened by one or more related programs. For example, a compressed StuffIt X (.SITX) archive can only be opened by StuffIt Deluxe or StuffIt Expander. If you try to open a StuffIt X archive with WinZip or another file decompression tool, the file will not be recognized. Conversely, open file formats are publicly available and are recognized by multiple programs. For example, StuffIt Deluxe can also save compressed archives in a standard zipped (.ZIP) format, which can be opened by nearly all decompression utilities. When software developers create applications that save files, choosing an appropriate file format is important. For some programs, it might make sense to use an open format, which is compatible with other applications. In other cases, using a proprietary format may give the developer a competitive advantage, since the files created with the program can only be opened with the developer's software. However, most people prefer to have multiple software options, so many developers have moved away from proprietary file formats and now use open formats instead. For example, Microsoft Word, which used to save word processing documents in the proprietary .DOC format, now saves documents in the open .DOCX format, which is supported by multiple applications. NOTE: While the term "file format" technically refers to the structure and content of a file, the term is also used interchangeably with "file type," which defines a specific type of file, such as a rich text file or a Photoshop document. Updated March 15, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_format Copy

File Server

File Server As the name implies, a file server is a server that provides access to files. It acts as a central file storage location that can be accessed by multiple systems. File servers are commonly found in enterprise settings, such as company networks, but they are also used in schools, small organizations, and even home networks. A file server may be a dedicated system, such as a network attached storage (NAS) device, or it may simply be a computer that hosts shared files. Dedicated file servers are typically used for enterprise applications, since they provide faster data access and offer more storage capacity than non-dedicated systems. In home networks, personal computers are often used as file servers. However, personal NAS devices are also available for home users who require more storage capacity and faster performance than a non-dedicated file server would allow. File servers can be configured in multiple ways. For example, in a home setting, a file server may be set to automatically allow access to all computers on the local network (LAN). In a business setting where security is important, a file server may require all client systems to log in before accessing the server. Others may only grant access to a specific list of machines, which can be defined by MAC address or IP address. Internet file servers, which provide access to files over the Internet, often require an FTP login before users can download files. NOTE: When you connect to a file server on a local network, it usually appears as a hard disk on your computer. You can double-click the hard disk icon to view the contents and browse through directories on the server, just like local folders. If you want to copy a file from the server to your computer, simply drag the file to your desktop or another folder on your local disk. If the file server has write permissions enabled, you can also copy local files to the server by dragging them to a directory on the server. When you copy files to or from the file server, it may appear that they are simply being transferred from one local folder to another. However, the files are actually being transferred across the network. Updated May 27, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_server Copy

File System

File System A file system is a method of organizing files on a computer's storage volume. File systems keep track of every file on a volume, recording its name, file type, and location so that a computer's operating system can find it later. A volume needs a file system created on it by a computer's operating system before it can store any data, whether it is on a hard disk drive, solid-state drive, or external flash drive. A computer's operating system creates a file system when it initializes or formats a volume. A file system includes a database-like index to manage a folder hierarchy, beginning with a root directory and allowing you to create folders and subfolders. File systems uniquely identify files on a volume using a combination of file name and location, so two files with identical names can exist on the same volume as long as they are in different folders. File systems support folders and nested subfolders to help organize files File systems also save metadata about files and folders. Most file systems record when a file was created and edited using timestamps. File systems on multi-user operating systems also keep track of which user has ownership of files or folders, as well as any other users that have permission to view and edit them. Some file systems can encrypt files, folders, or entire volumes to keep them secure. Different operating systems each use their own default file system. Windows uses NTFS, macOS uses APFS, and most Unix-like operating systems use the ext4 file system. Other file systems with cross-platform support, like exFAT and FAT32, are often used on storage devices that transfer files between computers. Updated February 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/file_system Copy
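File system metadata can be read by ordinary programs. As a small sketch, the PHP code below prints the last-modified timestamp and size that the file system has recorded for a file; the path is a placeholder and must point to an existing file.

<?php
$path = "example.txt";    // placeholder path

if (file_exists($path)) {
    // filemtime() and filesize() return metadata maintained by the file system.
    echo "Modified: " . date("Y-m-d H:i:s", filemtime($path)) . "\n";
    echo "Size: " . filesize($path) . " bytes\n";
} else {
    echo "File not found.";
}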

File Type

File Type A file type is a name given to a specific kind of file. For example, a Microsoft Word document and an Adobe Photoshop document are two different file types. While these file types are associated with individual applications, other file types, such as rich text (RTF) files and MP3 audio files, are standard file types that can be opened by multiple programs. The terms "file type" and "file format" are often used interchangeably. However, a file format technically describes the structure and content of a file. For example, the file type of an image file saved using JPEG compression may be defined as a "JPEG image file." The file format might be described as a binary file that contains a file header, metadata, and compressed bitmap image data. Each file type has one or more corresponding file extensions. For example, JPEG image files may be saved with a .JPG or .JPEG extension, while Adobe Photoshop image files are saved with a .PSD extension. The file extension appended to the end of each filename provides a simple way of identifying the file type of each file. File extensions are also used by the operating system to associate file types with specific programs. The relationships between file types and programs are called file associations and define what program opens each file type by default. You can view a list of common file types at FileInfo.com. Updated March 15, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/file_type Copy

Filename

Filename A filename is a text string that identifies a file. Every file stored on a computer's hard disk has a filename that helps identify the file within a given folder. Therefore, each file within a specific folder must have a different filename, while files in different folders can have the same name. Filenames may contain letters, numbers, and other characters. Depending on the operating system, certain characters cannot be used since they conflict with operators or other syntax used by the operating system. Different operating systems also have different limits for the number of characters a filename can have. While older operating systems limited filenames to only 8 or 16 characters, newer operating systems allow filenames to be as long as 255 characters. Of course, for most practical purposes, 16 characters is usually enough. Filenames also usually include a file extension, which identifies the type of file. The file extension is also called the "filename suffix" since it is appended to the filename, following a dot or period. For example, a Microsoft Word document may be named "document1.doc." While technically the filename in the preceding example is "document1" and "doc" is the extension, it is also acceptable to refer to "document1.doc" as the filename. In some cases, the filename may even refer to the file's directory location, i.e. ("My Documents\School Papers\document1.doc"). You can name a file by clicking on the file's icon or filename, waiting for a second, then clicking on the filename again. As long as the file is not locked, the filename will become highlighted, and you can type a new name for the file. You can also name a file the first time you save it from a program or by selecting "Save As..." from the program's File menu. Updated October 19, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/filename Copy

FILO

FILO Stands for "First In, Last Out." FILO is an acronym used in computer science to describe the order in which objects are accessed. It is synonymous with LIFO (the more commonly used term) and may also be called LCFS or "last come, first served." A stack is a typical data structure that may be accessed using the LIFO method. In a stack, each item is placed on top of the previous item, one at a time. Items can be removed from either the top of the stack (FILO) or from the bottom of the stack (FIFO). You can imagine a FILO stack as the paper in a printer tray. Whatever paper you place on top of the existing paper in the input tray will be accessed first. FILO is not necessarily a "fair" way to access data, since it operates in the opposite order of a queue. Still, the FILO method can be useful for retrieving recently used objects, such as those stored in cache memory. Updated August 7, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/filo Copy
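The same PHP array functions used for a queue can model a FILO stack: items are pushed onto the top and popped off the top, so the most recently added item is removed first. The sheet names below are for illustration only.

<?php
$stack = [];

// Sheets of paper placed in the tray, one on top of another.
array_push($stack, "Sheet 1");
array_push($stack, "Sheet 2");
array_push($stack, "Sheet 3");

// array_pop() removes the most recently added item (the top of the stack).
while (!empty($stack)) {
    echo array_pop($stack) . "\n";
}
// Prints Sheet 3, Sheet 2, Sheet 1: first in, last out.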

Filter

Filter In computing, the term "filter" has several different meanings. Examples include 1) search filters, 2) email filters, and 3) multimedia filters. 1. Search Filter Many search engines allow you to filter search results. Search filtering allows you to limit the results to a reduced set that matches the filter(s) you select. For example, Google provides search filtering options in the "Tools" menu at the top of the search results page. You can filter the results to only display webpages that match your query verbatim (exactly as you typed it). You can filter image results by size, color, file type, and usage rights. Many websites' internal search engines allow you to filter results as well. For example, e-commerce sites like Amazon and Best Buy provide filtering options when you perform a search. These options are typically found in the left sidebar of the desktop search results page. By checking different filters, you can fine-tune the results by brand, price ranges, style, rating, etc. 2. Email Filter An email filter may refer to a spam filter or a custom filter created by a user. Spam filtering is often handled by the mail server before messages even reach a user's account. However, several mail clients allow you to create custom junk filters to further reduce the spam, or unwanted mail, that shows up in your inbox. You can also create email filters or "rules" that automatically filter messages to specific folders within your email account. If you get a lot of messages about a particular topic, you can create a rule to automatically filter the messages to a specific folder. This can make it easier to organize your messages. 3. Multimedia Filter A filter may also refer to a digital effect added to an image, video clip, or audio track. For example, Photoshop includes a "Filter" option in the menu bar that contains several image filters, such as blur, distort, pixelate, and sharpen. Other image filters can be used to add effects such as vibrant colors or an antique character to an image. Instagram, for instance, provides dozens of filters for modifying the appearance of photos. Filters are so ubiquitous on Instagram, users sometimes use the hashtag #nofilter when captioning photos without a filter. Filters may also be applied to audio and video clips. For example, an audio filter can add extra reverb or chorus to a track. An autotune filter can be used to correct the pitch of a singer's voice to a specific key. Video filters can be used to add a specific visual style to a movie, such as black and white film or cinematic color tone. All multimedia filters are simply algorithms that are applied to digital media to create a desired effect. Updated May 1, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/filter Copy

Finder

Finder The Finder is the desktop interface of Macintosh computers. It allows users to move, copy, delete, and open files, navigate through folders, and move windows around the desktop. The Finder loads automatically when the computer starts up and is always running in the background. While the Finder is a core component of the Mac OS, it is technically an application. This means users can select the Finder from a list of active applications, either using the Dock or the Command-Tab shortcut. For example, a user may switch to the Finder so he can open a window and browse to a specific file he wants to open. Once the user finds the file, he can open it by double-clicking the icon. Since the Finder is always running in the background, it cannot be quit like other applications. Instead, the Finder can only be relaunched, which simply restarts the application. The Finder has been part of the Mac OS GUI since its inception and has gradually evolved throughout the years. It has always included a menu bar, desktop, icons, and windows, but now has several additional features as well. For example, the current Finder in Mac OS X includes the Dock for easy access to applications and files, an advanced search feature, and Quick Look technology, which allows many types of documents to be viewed directly in the Finder. The Finder windows now include a sidebar, with shortcuts to disks and folders, and support several viewing options, including Cover Flow, which allows users to flip through previews of documents. The Finder is a fundamental part of the Macintosh operating system and serves as the primary interface between the user and the Mac. Therefore, if you use a Mac, it may be a good idea to browse through the Finder's menu options and familiarize yourself with all the features the Finder has to offer. Updated December 2, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/finder Copy

Fintech

Fintech Fintech combines the words "finance" and "technology" into a single term. It describes any industry, product, or service that uses financial technology. The term is sometimes used in contrast with other tech categories, such as biotech and edtech. Just like other businesses, financial companies must evolve and adapt over time. For example, if a bank does not offer online banking in 2021, it is unlikely to attract any customers. Therefore, fintech is a necessary component of any financial institution. Below is a list of different types of fintech a bank might employ. Mobile apps Mobile check scanning and deposits Online banking Electronic statements Automatic investing Account syncing with financial apps Fintech also includes the underlying technologies that enable or simplify financial transactions. Examples include: Credit cards Chip readers Contactless payments (Apple Pay and Google Pay) NFC and Bluetooth wireless technologies Direct deposit and ACH payments Data encryption In recent years, new financial technologies have emerged, providing new ways to save, spend, and invest money. Some examples are: Blockchain Bitcoin Square payments Shopify e-commerce stores Robinhood investing Because security and accuracy are critical to financial transactions, most fintech products and services incorporate secure technologies, such as encryption, TLS, HTTPS, and others. "Regtech," a subset of fintech, includes products and services that help financial companies meet regulatory and compliance requirements. Updated March 25, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fintech Copy

Fios

Fios Stands for "Fiber Optic Service." Fios is a telecommunications network owned by Verizon that uses fiber optic cables to transfer data. It is considered a "Fiber to the Premises," or FTTP, service, since it brings fiber optic data transmission to residential homes as well as businesses. Fios supports data transfer rates of 940 Mbps downstream and 880 Mbps upstream. Services include Fios Internet, Fios TV, and Fios Digital Voice. Fios Internet is a high-speed broadband connection to the Internet where Verizon serves as the ISP. Fios TV is a high-definition television service that provides over 400 channels, similar to cable television. Fios Digital Voice is similar to a traditional telephone service. It provides a phone number and supports both domestic and international calls. Since Fios uses 100% fiber optic cables, it is one of the fastest Internet services available. It is also known for its high reliability. However, Fios is only available in specific areas of the United States that are connected to Verizon's fiber optic network. To find out if you can get Fios where you live, check Verizon's Fios Availability. Updated August 24, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fios Copy

Firewall

Firewall A physical firewall is a wall made of brick, steel, or other fire-resistant material that prevents the spread of a fire in a building. In computing, a firewall serves a similar purpose. It acts as a barrier between a trusted system or network and outside connections, such as the Internet. However, a computer firewall is more of a filter than a wall, allowing trusted data to flow through it. A firewall can be created using either hardware or software. Many businesses and organizations protect their internal networks using hardware firewalls. A single or double firewall may be used to create a demilitarized zone (DMZ), which prevents untrusted data from ever reaching the LAN. Software firewalls are more common for individual users and can be custom configured via a software interface. Both Windows and OS X include built-in firewalls, but more advanced firewall utilities can be installed with Internet security software. Firewalls can be configured in several different ways. For example, a basic firewall may allow traffic from all IP addresses except those flagged in a blacklist. A more secure firewall might only allow traffic from systems or IP addresses listed in a whitelist. Most firewalls use a combination of rules to filter traffic, such as blocking known threats while allowing incoming traffic from trusted sources. A firewall can also restrict outgoing traffic to prevent spam or hacking attempts. Network administrators often custom configure hardware and software firewalls. While custom settings may be important for a company network, software firewalls designed for consumers typically include basic default settings that are sufficient for most users. For example, in OS X, simply setting the firewall to "On" in the "Security & Privacy" System Preference prevents unauthorized applications and services from accepting incoming connections. Some firewalls even "learn" over time and dynamically develop their own filtering rules. This helps them become more adept at blocking unwanted connections without any manual customization. Updated December 18, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/firewall Copy

Firewire

Firewire FireWire is an I/O interface developed by Apple Computer. It is also known as IEEE 1394, which is the technical name standardized by the IEEE. Other names for IEEE 1394 include Sony i.Link and Yamaha mLAN, but Apple's FireWire name is the most commonly used. There are two primary versions of the FireWire interface – FireWire 400 (IEEE 1394a) and FireWire 800 (IEEE 1394b). FireWire 400 uses a 6-pin connector and supports data transfer rates of up to 400 Mbps. FireWire 800 uses a 9-pin connector and can transfer data at up to 800 Mbps. The FireWire 800 interface, which was introduced on Macintosh computers in 2003, is backwards compatible with FireWire 400 devices using an adapter. Both interfaces support daisy chaining and can provide up to 30 volts of power to connected devices. FireWire is considered a high-speed interface, and therefore can be used for connecting peripheral devices that require fast data transfer speeds. Examples include external hard drives, video cameras, and audio interfaces. On Macintosh computers, FireWire can be used to boot a computer in target disk mode, which allows the hard drive to show up as an external drive on another computer. Mac OS X also supports networking two computers via a FireWire cable. While FireWire has never been as popular as USB, it has remained a popular choice for audio and video professionals. Since FireWire supports speeds up to 800 Mbps, it is faster than USB 2.0, which maxes out at 480 Mbps. In fact, even FireWire 400 provides faster sustained read and write speeds than USB 2.0, which is important for recording audio and video in real-time. Future versions of IEEE 1394, such as FireWire 1600 and 3200, were designed to support even faster data transfer speeds. However, the FireWire interface has been superseded by Thunderbolt, which can transfer data at up to 10,000 Mbps (10 Gbps) and is backwards compatible with multiple interfaces. Updated January 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/firewire Copy

Firmware

Firmware Firmware is a software program, or set of instructions, programmed onto a hardware device. It provides the necessary instructions that the device needs to communicate with other hardware. A device stores its firmware in a ROM chip — typically flash memory, which can be updated and rewritten to update a device's firmware. Firmware is "semi-permanent" since it remains the same until modified by a firmware updater. Device makers occasionally release firmware updates to fix bugs, improve performance, or introduce compatibility with new devices. For example, your network router may occasionally receive a firmware update to fix a security problem, and smart home devices may receive updates to resolve reliability issues. Device firmware updates come via a few different methods. More advanced devices, like routers and other network equipment, may include self-updaters that connect to the Internet, download an update, and apply it themselves. Smart home devices controlled by a mobile app can receive firmware updates through that app. Other devices, like computer game controllers, require you to visit the manufacturer's website to download an updater program, connect the device to your computer, then run the updater. Just make sure that once you start a firmware update, it finishes before you power the device off — if a firmware update is interrupted, the device may become bricked and not function. Updated March 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/firmware Copy

Flag

Flag In computer science, a flag is a value that acts as a signal for a function or process. The value of the flag is used to determine the next step of a program. Flags are often binary flags, which contain a boolean value (true or false). However, not all flags are binary; some can store a range of values. You can think of a binary flag as a small red flag that is lying flat when it is false, but pops up when it is true. A raised flag says to a program, "Stop - do something different." A common example of a flag in computer programming is a variable in a while loop. The PHP loop below will iterate until $flag is set to true.

$flag = false;
$i = 1;
while (!$flag)   // stop when $flag is true
{
  echo "$i, ";
  $i++;          // increment $i
  if ($i > 100) $flag = true;
}

The above code will print out numbers (1, 2, 3...) until 100. Then the loop will break because $flag will be set to true. Using a flag in this context is effective, but unnecessary. Instead, the while loop condition could have been while ($i < 101) instead of while (!$flag). This would produce the same result and eliminate the need for the $flag variable. Efficiently written programs rarely need explicit flags since an existing variable within a function can often be used as a flag. A binary flag only requires one bit, which can be set to 0 or 1. However, bytes have eight bits, meaning seven bits are unused when a single byte stores a binary flag. While a single byte is still a very small amount of data, a programmer may choose to use a single byte to store multiple binary flags. Non-Binary Flags Non-binary flags use multiple bits and can store more than "yes or no" or "true or false." These types of flags require more than one bit, but not necessarily a full byte. For example, two bits can produce four possible options. 00 = option A 01 = option B 10 = option C 11 = option D You can think of a non-binary flag as a flag with multiple colors. A program can check 1) whether the multi-bit flag is set and 2) what value it contains. Depending on the value (or "color") of the flag, the program will continue in the corresponding direction. Updated May 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flag Copy
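To show how several binary flags can share a single value, the PHP example below packs three made-up text-style flags into one integer using bitwise operators; the flag names and bit values are illustrative assumptions.

<?php
// Each flag occupies one bit (values chosen for illustration).
define("FLAG_BOLD",      0b001);
define("FLAG_ITALIC",    0b010);
define("FLAG_UNDERLINE", 0b100);

// Set two flags at once by combining them with bitwise OR.
$style = FLAG_BOLD | FLAG_UNDERLINE;

// Test each flag individually with bitwise AND.
if ($style & FLAG_BOLD)      echo "Bold is on\n";
if ($style & FLAG_ITALIC)    echo "Italic is on\n";      // not printed
if ($style & FLAG_UNDERLINE) echo "Underline is on\n";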

Flaming

Flaming Flaming is the act of posting or sending offensive messages over the Internet. These messages, called "flames," may be posted within online discussion forums or newsgroups, or sent via e-mail or instant messaging programs. The most common area where flaming takes place is online discussion forums, which are also called bulletin boards. Flaming often leads to the trading of insults between members within a certain forum. This is an unfortunate result, as it often throws the discussion of a legitimate topic well off track. For example, the topic of a discussion forum may be "Choosing a Mac or a PC." Some Mac user may post a message gloating about the benefits of a Mac, which in turn prompts a response from a PC user explaining why Macs suck and why Windows is obviously the better platform. The Mac user may then post a reply saying that Mac users are, in fact, a more intelligent species who are not as naive as PC users. This kindles a more personal attack from the PC user, which incites an all-out flame war. These flame wars, also called "pie fights," are not limited to only two people at a time, but may involve multiple users. This causes a swell of negativity within online discussion groups and results in little, if any, productivity. Flaming is unfortunately one of the most common breaches of online netiquette. Instead of being considerate of others' viewpoints, "flamers" force their own agendas on other users. While some flaming is intentional, some is not. This is because users may misunderstand the intent of another user's message or forum posting. For example, someone may make a sarcastic comment that is not understood as sarcastic by another user, who may take offense to the message. Using emoticons and clearly explaining one's intent can help avoid online misunderstandings. Because of the adverse effects of flaming, it is best to err on the side of humility and be courteous when posting or sending messages online. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flaming Copy

Flash

Flash Macromedia Flash (later Adobe Flash) was a multimedia web browser plug-in that enabled animations, video, and interactive content on websites. Flash was an extremely popular plug-in for more than a decade, but it eventually fell out of favor due to a combination of performance and security problems and the introduction of HTML5 and its multimedia features. Originally called FutureSplash Animator and released in 1996, Flash first became popular as a vector-based animation tool. It allowed people to easily create animations that could be played back in any web browser with the Flash plug-in installed. Later updates introduced ActionScript (an object-oriented programming language that added new levels of interactivity) and support for high-quality video streaming. By 2010 the Flash plug-in was included by default with most web browsers, and it was common to see interactive games, streaming video, and even entire websites built using Flash. However, the Flash plug-in had its drawbacks. It was very resource-intensive when content was playing, quickly draining batteries on laptop computers. The plug-in also proved to be a security risk that allowed an attack vector for malware. The lack of support for Flash on Apple's iPhone, and growing browser support for HTML5 video, ultimately caused Adobe to end support for the Flash plug-in in 2020. The animation tool itself was renamed Adobe Animate and now supports HTML5, WebGL, and SVG animation output. NOTE: "Flash" may also refer to flash memory. Erasing a flash memory cell is often called "flashing" the memory. Updated January 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/flash Copy

Flash Drive

Flash Drive A flash drive is a small, removable data storage device that uses flash memory and connects directly to a computer's USB port. A flash drive is typically very compact, only as wide as the USB port itself and an inch or two long, but is also rugged enough to travel safely in a pocket or purse. Connecting a flash drive to a computer requires no additional hardware, which makes using a flash drive simple and convenient. Early flash drives could only store a few megabytes of data, but modern flash drives are available in sizes up to a terabyte or more. Due to their portability, high capacity, and fast transfer speeds, you can use flash drives for a wide variety of uses. You can use a flash drive to transfer files from one computer to another without establishing a network connection or to keep important files with you to access from anywhere. You can back up important data to a flash drive and store it in a safe place, like a safe deposit box. Most computers can boot from flash drives, so you can use one for operating system installation files or even boot an operating system directly from a flash drive to run system diagnostics. A SanDisk flash drive with a USB Type A connector Most flash drives use the original USB Type A connector found on computers since USB's introduction in the late 1990s. However, USB Type C connectors are increasingly common on flash drives to allow them to connect to laptops and smartphones that only use the newer connector. Many flash drives even include both USB Type A and Type C connectors that you can switch between for extra versatility. NOTE: Windows and other operating systems will automatically mount flash drives that you connect to your computer, which could allow malware on a flash drive to infect your computer. You should never plug an unfamiliar flash drive into your computer if you're unsure what it contains. Updated October 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/flash_drive Copy

Flash Memory

Flash Memory Flash memory is a form of computer ROM chip used in computers, mobile devices, digital cameras, and many other electronic devices. It is non-volatile (preserving data even when powered off) and solid-state (with no moving parts). Flash memory stores firmware instructions for all sorts of electronic devices — computers and smartphones, networking equipment, cars, televisions, and even household appliances. It is also used for general data storage in SSDs, flash drives, SD cards, and embedded disks in mobile devices. Flash memory is a type of electrically-erasable programmable read-only memory or EEPROM. The name comes from how the memory erases data, clearing a section of memory cells in a single action or "flash." Flash memory was first used for small ROM chips to store a computer's BIOS settings and other firmware data, where the ability to update itself when necessary was valuable. A flash memory chip inside a USB flash drive Over time, the cost of flash memory decreased while storage capacities increased. Cheap flash memory enabled the creation of small memory cards like SD and Compact Flash, as well as small USB flash drives. It is also used in solid-state drives for computers, which can read and write data significantly faster than older hard disk drives that rely on spinning platters. Since flash memory is compact and has no moving parts, it is also ideal for storing data on laptop computers, smartphones, and tablets. Updated January 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/flash_memory Copy

Flat File

Flat File A flat file database is a database that stores its contents in a single table. Most flat file databases are formatted as plain text files — each line containing a separate record, and fields separated by delimiter characters like commas or tabs. Most flat file databases are relatively small due to their simple structure, but very large flat file databases may be used when only a single table is needed. Unlike relational databases, which contain multiple tables that link to and reference each other, flat file databases store all of their data in a single table. This makes flat file databases poorly suited for complex data structures like a store's inventory database, but still useful for storing simple information like a contacts list. Since flat file databases are plain text, their contents are limited to text and numbers. Text in a flat file database must also be formatted properly so that commas or tabs within the database do not accidentally split fields. A flat file database stores its data in a single table Since flat file databases are plain text with simple delimiter characters, they are often used as intermediate files when transferring data between two applications. Comma-separated value (CSV) files are the most common, often used by spreadsheet applications like Microsoft Excel to export data in a non-proprietary file format. They are also widely used by applications to store data internally, often for configuration data. Updated June 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/flat_file Copy
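Reading a flat file usually amounts to splitting each line on its delimiter. PHP's built-in fgetcsv() function handles comma-separated records directly, as in the sketch below; the file name and its Name,Email,Phone column layout are assumptions for the example.

<?php
$handle = fopen("contacts.csv", "r");    // assumed to contain Name,Email,Phone records
if ($handle === false) {
    exit("Could not open the file.");
}

while (($record = fgetcsv($handle)) !== false) {
    // Each record is returned as an array of fields.
    list($name, $email, $phone) = $record;
    echo "$name <$email> $phone\n";
}

fclose($handle);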

Flatbed

Flatbed A flatbed is a type of scanner or copier that uses a flat, glass surface for scanning documents or other objects. Most flatbed scanners have an adjustable lid that can be raised to allow magazines, books, and other thick objects to be scanned. This is a significant benefit over sheet-fed scanners or copiers (sometimes referred to as automatic document feeders, or ADFs), which can only accept paper documents. Flatbed scanners and copy machines range in size from standard letter size (8.5"x11") to legal size and beyond. For example, a scanner used to scan architectural blueprints may be the size of several letter-size scanners. Because of their large size capacity and ability to scan thick objects, flatbed scanners are more versatile than sheet-fed scanners. However, they cannot automatically feed pages into the scanner, which means scanning multiple pages can be a time-consuming process. For this reason, some scanners and copy machines include both a flatbed scanning surface for large or thick objects, and an ADF for feeding multiple pages at once. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flatbed Copy

Flexible

Flexible In the computer world, "flexible" may refer to hardware, software, or a combination of the two. It describes a device or program that can be used for multiple purposes, rather than a single function. An example of a flexible hardware device is a hybrid tablet that also functions as a laptop. The Microsoft Surface, for instance, is more flexible than a typical tablet, since it can also be used as a Windows laptop. A router that can serve as a firewall for both internal and external networks might also be considered flexible. Instead of being limited to filtering traffic from the Internet, a flexible router may be used to filter traffic within a local network as well. Horizontal market software, which is used across multiple industries, is a common type of flexible software. Spreadsheet programs, for example, are flexible because they provide a range of uses, such as managing team rosters, tracking inventory, and organizing finances. A professional design program like Adobe InDesign may be considered flexible, since it can be used to create digital layouts for print, web, and electronic publications. Web browsers are flexible since they serve many different purposes. Besides surfing the web, you can use a browser to check email, interact with social media sites, play games, and run various web applications. Flexible hardware and software products provide extra value since they can be used in many different ways. However, flexibility is not always important. In some cases, a device or program designed for a specific purpose can perform a task better than a multipurpose solution. Updated November 10, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flexible Copy

Floating Point

Floating Point Floating-point numbers are a data type used in computer programming that stores a number with a floating decimal point. A decimal point "floats" when its position is not fixed in place by the number format. For example, 3.145, 12.99, and 234.9876 are all floating-point numbers since the decimal point is not always in the same position. A computer stores a floating-point number by breaking it into two parts. The first is called the "significand" or "mantissa," which stores the significant digits as a whole number value without a decimal point. The second is the "exponent," which modifies the magnitude of the significand by setting the position of the decimal point. Multiplying the significand by the exponent produces the final value. Some floating-point number formats use a base-10 exponent, while others use a base-2 exponent. For example, storing the number 4.7988 as a floating-point number would use the significand "47988" and the exponent 10⁻⁴. The floating-point number "4.7988" represented by its significand and its exponent Computer programming languages store numbers without decimal points (called integers) as a separate number format. Some languages also support fixed-point numbers, which fix the decimal point in the same place for every number — for example, currency values that always have two digits after a decimal point. Mathematical calculations using integers or fixed-point numbers are less computationally expensive, so using those options when possible is more efficient than using floating-point numbers for everything. NOTE: Older computer CPUs like the Intel 80386 and the Motorola 68000 series lacked hardware support for floating-point operations, slowly performing them in software instead. These chips used a companion processor, called a floating point unit (FPU) or "math coprocessor," to take over floating-point operations and significantly improve performance. Modern processors include support for floating-point operations without a coprocessor. Updated November 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/floating_point Copy
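Because some decimal fractions have no exact binary floating-point representation, small rounding errors appear in ordinary arithmetic. The short PHP example below shows a common consequence and a typical workaround.

<?php
// 0.1 and 0.2 cannot be represented exactly in binary floating point,
// so their sum is not exactly equal to 0.3.
$sum = 0.1 + 0.2;

var_dump($sum == 0.3);               // bool(false)
printf("%.20f\n", $sum);             // 0.30000000000000004441

// Compare floating-point values using a small tolerance instead of strict equality.
var_dump(abs($sum - 0.3) < 1e-9);    // bool(true)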

Floppy Disk

Floppy Disk A floppy disk was a removable data storage disk commonly used in the 1980s and 1990s. They were available in multiple sizes and capacities, including 5.25-inch and 3.5-inch versions. Floppy disks contained a thin, flexible disk coated with iron oxide that stored data magnetically like a hard disk. Computers of the time usually included at least one floppy disk drive (FDD) to read and write data from floppy disks. The first floppy disks were 8 inches in diameter, read-only, and held 80 KB of data; they primarily replaced punch cards for shipping data updates to mainframes. Smaller, consumer-friendly 5.25-inch floppy disks were the first that became popular among home computer users. Early versions could store 360 KB, while later revisions could store 1.2 MB. Smaller 3.5-inch floppy disks eventually became more popular than the larger diskettes. 3.5-inch disks used a hard plastic case instead of a flexible one, making them more durable. The original Apple Macintosh included a built-in, 3.5-inch 400 KB floppy drive and helped popularize the format. A high-density version, released in 1987, could store 1.44 MB of data and became the standard for the next decade. Decline of Floppy Disks Floppy disks were the standard method for distributing software until the mid-1990s when CD-ROM drives became common. Windows 95, for example, was available in two editions — a single CD-ROM or a set of 13 floppy disks. Recordable CD-R drives allowed people to back up their data to CD instead of using a stack of unreliable floppy disks. The rise of the Internet also made it possible to transfer files online instead of delivering files on a floppy disk. The original Apple iMac, released in 1998, was the first mainstream computer to not include a floppy disk drive; within a few years, most computer manufacturers followed Apple's lead. Dropping costs for flash memory and the increasing adoption of USB ports led to the introduction of the USB flash drive, signaling the end of the floppy disk as the most accessible portable storage medium. NOTE: Even though the floppy disk is now obsolete, its likeness lives on as an icon representing the Save command. Updated March 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/floppy_disk Copy

FLOPS

FLOPS Stands for "Floating Point Operations Per Second." FLOPS is typically used to measure the performance of a computer's processor. While clock speed, which is measured in megahertz, is often seen as an indicator of a processor's speed, it does not define how many calculations a processor can perform per second. Therefore, FLOPS is a more direct measure of a processor's raw computational speed. Still, a FLOPS reading only measures floating point calculations and not integer operations. Therefore, while FLOPS can accurately measure a processor's floating point unit (FPU), it is not a comprehensive measurement of a processor's performance. In order to accurately gauge the processing capabilities of a CPU, multiple types of tests must be run. Updated January 22, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flops Copy
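As a rough illustration of how a FLOPS figure can be estimated, the Python sketch below times a large matrix multiplication, which requires roughly 2n^3 floating-point operations, and divides that operation count by the elapsed time. This is a simplified sketch that assumes the NumPy library is installed, not an official measurement method.

    import time
    import numpy as np

    n = 1024
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    np.dot(a, b)                        # n x n matrix multiply: about 2 * n**3 operations
    elapsed = time.perf_counter() - start

    flops = 2 * n**3 / elapsed
    print(f"~{flops / 1e9:.1f} GFLOPS")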

Flowchart

Flowchart A flowchart is a diagram that describes a process or operation. It includes multiple steps, which the process "flows" through from start to finish. Common uses for flowcharts include developing business plans, defining troubleshooting steps, and designing mathematical algorithms. Some flowcharts may only include a few steps, while others can be highly complex, containing hundreds of possible outcomes. Flowcharts typically use standard symbols to represent different stages or actions within the chart. For example, each step is shown within a rectangle, while each decision is displayed in a diamond. Arrows are placed between the different symbols to show the direction the process is flowing. While flowcharts can be created with a pen and paper, there are several software programs available that make designing flowcharts especially easy. Common programs that can be used to create flowcharts include SmartDraw and Visio for Windows and OmniGraffle for the Mac. Updated October 23, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/flowchart Copy

Fluid Layout

Fluid Layout A fluid layout is a type of webpage design in which the layout of the page resizes as the window size is changed. This is accomplished by defining areas of the page using percentages instead of fixed pixel widths. Most webpage layouts include one, two, or three columns. In the early days of web design, when most users had similar screen sizes, web developers would assign the columns fixed widths. For example, a fixed layout may include a main content area that is 960px wide with three columns that have widths of 180px, 600px, and 180px. While this layout might look great on a 1024x768 screen, it might look small on a 1920x1080 screen and would not fit on an 800x600 screen. Fluid layouts solve this problem by using percentages to define each area of the layout. For example, instead of creating a content area of 960px, a web developer can create a layout that fills 80% of the screen and the three columns could take up 18%, 64%, and 18% respectively. By using percentages, the content can expand or shrink to fit the window of the user's computer. The CSS used to create a fixed layout vs. a fluid layout is shown below. Fixed layout: .content { width: 960px; } .left, .right { width: 180px; } .middle { width: 600px; } Fluid layout: .content { width: 80%; } .left, .right { width: 18%; } .middle { width: 64%; } The CSS classes in the examples could each be assigned to a div within a page's HTML where the .left, .right, and .middle classes are enclosed within the .content class. The content class could also be assigned to a table and the other classes could be assigned to table cells. The fixed width .content class does not require a defined width since it automatically spans the width of the enclosed divs or table cells. Fluid Layout vs Responsive Design The terms "fluid layout" and "responsive web design" are sometimes used interchangeably, but they are two different things. A page created using responsive web design includes CSS media queries, which load different styles depending on the width of the window or the type of device used to access the page. Responsive web design requires more CSS (and sometimes JavaScript) than a basic fluid layout, but it also provides more control over the layout of the page. Updated May 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fluid_layout Copy

Focus

Focus In computer terminology, focus means to select an element within a user interface. When an item is "focused," it can be controlled using keyboard input. For example, clicking within a search box on a webpage focuses the field, allowing you to enter text. Focused text fields often display a visual indicator, such as an outline or halo. In the example below, the Name field is focused within the web form. Web form with focused name field Web browsers allow you to press the Tab key to cycle through focusable elements on a page. While text fields are focusable by default, web developers can make other items focusable, including links, buttons, and other elements. It is also possible to choose the "tab order" of these items using the tabindex HTML attribute. Elements with a positive tabindex are focused first in ascending order (tabindex=1, then tabindex=2, and so on), followed by elements with tabindex=0 in the order they appear on the page. Accessibility Focusing items is essential for users with accessibility needs. Most browsers only focus form fields, but ones with assistive technology allow users to focus other elements, such as links and buttons. For example, tabbing to a link within the page content and pressing Enter is the same as clicking the link. This technology enables users to navigate webpages without using a mouse. Updated October 16, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/focus Copy

Folder

Folder A digital folder has the same purpose as a physical folder – to store documents. Computer folders can also store other types of files, such as applications, archives, scripts, and libraries. Folders can even store other folders, which may contain additional files and folders. Folders are designed for organizing files. For example, you might store your digital photos in a "Pictures" folder, your audio files in a "Music" folder, and your word processing documents in a "Documents" folder. In Windows, software programs are installed by default in the "Program Files" folder, while in OS X they are stored in the "Applications" folder. Folders are also called directories because of the way they organize data within the file system of a storage device. All folders are subfolders, or subdirectories of the root directory. For example, in Windows, C:\ is the root directory of the startup disk. The Internet Explorer application is installed in the C:\Program Files\Internet Explorer directory, which is also the directory path of the Internet Explorer folder. While folders may contain several gigabytes of data, folders themselves do not take up any disk space. This is because folders are simply pointers that define the location of files within the file system. You can view how much data is stored in a folder by right-clicking it and selecting Properties in Windows or Get Info in OS X. To create a new folder, right-click on the desktop or an open window and select New → Folder (Windows) or New Folder (OS X). Updated June 12, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/folder Copy
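Because a folder's reported size is simply the total size of the files it contains, that number can be computed with a short script. The Python sketch below (an illustrative example; the "Documents" path is a placeholder) walks a directory tree and adds up the size of every file inside it.

    import os

    def folder_size(path):
        # Total size, in bytes, of all files inside a folder and its subfolders
        total = 0
        for dirpath, dirnames, filenames in os.walk(path):
            for name in filenames:
                file_path = os.path.join(dirpath, name)
                if os.path.isfile(file_path):      # skip broken links
                    total += os.path.getsize(file_path)
        return total

    print(folder_size("Documents"), "bytes")       # "Documents" is a placeholder path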

Font

Font A font is a collection of characters with a similar design. These characters include lowercase and uppercase letters, numbers, punctuation marks, and symbols. Changing the font can alter the look and feel of a block of text. Some fonts are designed to be simple and easy to read, while others are designed to add a unique style to the text. For example, Arial has a simple, modern look, while Palatino has an older, more traditional appearance. Serif vs Sans-Serif Fonts The two most basic categories of fonts are "serif" and "sans-serif." Small extensions on the edges of characters, such as the horizontal line on the bottom of a capital "T," are called serifs. Fonts that include these small lines are called serif fonts. The word "sans" means "without," so sans-serif fonts do not have these extra lines. Generally, serif fonts have a traditional appearance and are often used in printed books and newspapers. Sans-serif fonts have a more modern look and are commonly used on the web. Most word processors allow you to select a font from a "Fonts" drop-down menu in the toolbar. You can apply a font to an entire document or to a section of highlighted text. In order to use a font, it must be installed on your computer. In Windows, you can add or remove fonts using the Fonts control panel, located in Control Panel → Appearance and Personalization → Fonts. In macOS, you can manage fonts using Font Book, located in the /Applications directory. Font vs Typeface Originally, the term "font" referred to a specific size and style of a typeface. For example, Verdana was a typeface and Verdana 16px bold would be a specific font. However, in recent years, the terms have been used interchangeably, with companies like Microsoft, Apple, and Google all using the term "font" to describe a typeface. Therefore, it is acceptable to refer to a typeface such as Roboto as a font. Updated January 9, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/font Copy

Font Case

Font Case Font case describes the way characters are capitalized within a word or phrase. "Font" refers to the typeface used to display the letters, while "case" refers to their capitalization. Below are examples of different font cases, shown in their corresponding style. UPPER CASE - every character is capitalized lower case - no letters are capitalized Sentence case - the first letter of the first word in a sentence is capitalized Title Case - the first letter of each word is capitalized camelCase - the first letter of each word within a compound word, besides the first word, is capitalized PascalCase - the first letter of each word within a compound word, including the first word, is capitalized toGGLe caSe - random characters are capitalized Specific font cases are appropriate for various applications. For example, sentence case is the grammatically correct way to type a sentence. Title case is used for proper names and titles of articles. Upper case (also "uppercase" or "ALL CAPS") can be used to bring attention to a specific word or phrase. Lower case (also "lowercase") is commonly used in online chat and text messaging since it is the fastest way to type messages. Programmers use camelCase when writing source code. PascalCase (sometimes called "UpperCamelCase") is an alternative version that some developers prefer. Toggle case may be used to modify the appearance of a handle or online identity, such as an eSports player name. It is important to use the correct font case when publishing professional articles and writing formal messages. Font case is less important with informal communication. For example, it is acceptable to type a text message in all lower case, but it is unprofessional to only use lower case when corresponding via email. If you only use upper case, it SEEMS LIKE YOU ARE SHOUTING. Keeping caps lock off and using the Shift key appropriately will help you communicate more effectively. NOTE: In macOS, you can select Edit → Transformations while working in a text editor to apply a specific font case to a block of selected text. Options include Make Upper Case, Make Lower Case, and Capitalize, which transforms the selected text to title case. Updated February 3, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/font_case Copy
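To make the differences between these cases concrete, the short Python sketch below (an illustrative example, not part of the original entry) converts the same three-word phrase into several of the cases listed above.

    phrase = "font case example"
    words = phrase.split()

    print(phrase.upper())                                          # UPPER CASE
    print(phrase.lower())                                          # lower case
    print(phrase.capitalize())                                     # Sentence case
    print(" ".join(w.capitalize() for w in words))                 # Title Case
    print(words[0] + "".join(w.capitalize() for w in words[1:]))   # camelCase
    print("".join(w.capitalize() for w in words))                  # PascalCase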

Footer

Footer The term "footer" has many uses in the computer world. However, the two most common are 1) a document footer, and 2) a webpage footer. 1. Document Footer A document footer is a small section at the bottom of each page within a document. It is often used to display company data or copyright information. In longer documents, the footer may be used to specify the current section of the document as well. By default, changes made to the footer on one page will change the footer on all other pages in the section. If no sections are defined, modifying the footer will update all the pages in the document. Most word processors allow you to view and edit document footers by selecting View → Headers and Footers. This enables you to edit the content of both the header at the top of the page and the footer at the bottom. Some word processors, like Microsoft Word, allow you to simply double-click within the footer section to edit the content. If you want to change the height of the footer, you can modify the margins in the Document Properties window. Since page numbers are often placed at the bottom of each page, they are generally considered part of the footer. However, unlike most footer content, page numbers are different on each page, since they are automatically incremented. Additionally, changes made to the footer will not affect the page numbers. 2. Webpage Footer The bottom section of a webpage is also known as a footer. This area typically contains the name of the company or organization that publishes the website, along with relevant copyright information. Some websites may also include basic navigation links, such as "About Us," "Contact," and "Help." Corporate website footers often include additional links to "Terms of Use," "Privacy Guidelines," and "Advertising" pages as well. While footers are not required on webpages, they are found on nearly all major websites. HTML 5 even includes a tag, which is designed specifically for placing footer information at the bottom of a webpage. Additionally, visitors often expect to find certain information about a website when they scroll down to the bottom of a page in their web browser. Therefore, most web developers include a footer as a standard part of their website template. Updated October 2, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/footer Copy

Fork

Fork In software development, a fork is a new application developed from an existing one. When an application is "forked," it creates a new, separate program, rather than a new development branch. Open-source project forks are more common than proprietary software forks, but both are possible. Open-Source Software Because open-source software may be freely distributed and edited, anyone can legally fork an open-source application. The codebase of any open-source program or operating system may be used as the basis of a new project. Most Linux distributions, for example, are forks from earlier Linux-based operating systems. Because of Linux's popularity and open-source code base, hundreds of Linux forks exist. Examples of applications created from forks of open-source code libraries include: LibreOffice, from OpenOffice.org; Collabora Online, from LibreOffice; Calligra, from KOffice; and Basilisk, from Firefox. Proprietary Software Since the source code of proprietary software is protected by copyright, outside developers cannot fork a commercial application. However, a developer may wish to fork an application to create a slightly different version for another purpose. For example, a developer may fork an image editor to create a read-only image viewer that prevents users from modifying files. NOTE: Unlike a mod, which alters or adds features to an existing application, a fork is a new, distinct program with a different name. Updated September 24, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fork Copy

Form

Form A form is a user interface element within a website or software application that allows you to enter and submit data. It contains one or more fields and an action button, such as Save or Submit. Web Forms Web forms allow you to submit information to a website via a web browser. Examples include registration forms, online surveys, and contact forms. When you click Submit, the web server hosting the website processes the data you entered in the form. For example, a feedback form may send an email to the webmaster. A registration form may create a new account in a database. A web form is a standard HTML element, created using the <form> tag. Elements within a form may include: <input type="text"> – a one-line text input field for name, email address, etc. <textarea> – a large text field that supports multiple lines of text; often used for messages <select> – a drop-down menu with multiple options, defined by <option> tags <input type="hidden"> – a hidden field used to submit additional data, such as a timestamp or referring page URL <input type="submit"> – a button used for submitting the data entered in the form. NOTE: Form data is often processed by a server-side script, but form validation (such as verifying required fields) is usually performed by client-side JavaScript or HTML checks. Application Forms While forms are commonly associated with websites, many software programs also include forms. For instance, database applications allow you to create custom forms to enter data more quickly. Media converters provide forms that enable you to choose audio and video compression settings. All dialog boxes, such as Print and Save As windows, are considered forms because they include selectable options and a primary action button. Updated October 20, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/form Copy
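To show how submitted form data might be handled on the server side, here is a minimal sketch using the Flask web framework (an assumption made for illustration; the route, field names, and message are hypothetical). It reads two submitted fields and returns a confirmation message.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/submit", methods=["POST"])
    def submit():
        # request.form holds the fields submitted by the HTML form
        name = request.form.get("name", "")
        email = request.form.get("email", "")
        return f"Thanks, {name}! A confirmation will be sent to {email}."

    if __name__ == "__main__":
        app.run()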

Format

Format The term "format" has several meanings, related to 1) disk formatting, 2) page formatting, and 3) file formats. 1) Disk formatting In order for storage media, such as a hard drive or flash drive to be recognized by your computer, it first needs to be initialized, or "formatted." Formatting a disk involves testing the disk and writing a new file system onto the disk. This enables the computer to read the disk's directory structure, which defines the way files and folders are organized on the disk. You can use a disk utility program to format or reformat a disk. This will create a blank, empty disk for storing your files. Therefore, only format disks that don't contain important data or make sure you have backed up your data before reformatting a disk! When you reformat a disk, it will appear to be empty. This is because the directory structure has been rewritten, making the entire disk space available for writing new data. However, the old files are still on the disk. They just don't show up since they are no longer included in the directory structure. So if you accidentally format a disk (which is pretty hard to do), you may be able to retrieve your files using a disk utility such as Norton Disk Doctor or DiskWarrior. 2) Page formatting The term "format" can also be used to describe the page layout or style of text in a word processing document. When you format the layout of a page, you can modify the page size, page margins, and line spacing. When you format the text, you can choose the font and font size, as well as text styles, such as bold, underlined, and italics. 3) File formats A file format refers to the way data is saved within a file. For example, some files are saved in a plain text format, while others are saved as binary files. Software developers often create proprietary file formats for their programs, which prevents the files from being used by other applications. Updated January 20, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/format Copy

Formula

Formula The term "formula" has several different meanings, depending on the field in which the term is used. For example, a mathematical formula is a relationship expressed using numbers and symbols. A chemical formula defines the atomic structure of a chemical compound. In computing, a formula is a calculation performed on one or more variables. Formulas and Functions Functions in computer programs often contain formulas. A simple formula, for instance, might convert centimeters to inches. This formula, along with several others, may be used within a metric conversion function. The function can apply the appropriate formula to the input (typically defined by the input parameters) and produce the resulting value as output. Formulas and functions are similar and sometimes the terms are used interchangeably. However, formulas are often the building blocks of functions, not the other way around. Spreadsheet Formulas Formulas are also used in spreadsheets. Most spreadsheet applications allow you to enter formulas into cells instead of static data. For example, if a spreadsheet contains numbers in columns A and B, column C could be used to total the numbers in the respective rows of columns A and B. Entering the formula "=A2+B2" in cell C2 would display the sum of the cells A2 and B2 in C2. Advanced spreadsheet formulas can be used to determine the average of multiple cells, divide cells by the results of other formulas, and perform other complex operations. NOTE: To enter a formula in a spreadsheet, start by typing an equal sign (=) into an empty cell. Updated June 7, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/formula Copy
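To illustrate the relationship between a formula and a function described above, the short Python sketch below (an illustrative example; the function name is arbitrary) wraps the centimeters-to-inches formula inside a function that can be reused with any input value.

    def cm_to_inches(centimeters):
        # Formula: one inch is defined as exactly 2.54 centimeters
        return centimeters / 2.54

    print(cm_to_inches(30))     # about 11.81 inches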

FPGA

FPGA Stands for "Field-Programmable Gate Array." An FPGA is an integrated circuit that can be customized for a specific application. Unlike traditional CPUs, FPGAs are "field-programmable," meaning they can be configured by the user after manufacturing. FPGAs contain programmable logic blocks that can be wired in different configurations. These blocks create a physical array of logic gates that can be used to perform different operations. Because the gates are customizable, FPGAs can be optimized for any computing task. This gives FPGAs the potential to perform operations several times faster than a hard-wired processor. Field-programmable gate arrays are typically customized using a hardware description language, or HDL. A programmer can use HDL commands to configure the gate interconnects (how the gates connect to each other) as well as the gates themselves. For example, a gate may be assigned a boolean operator, such as AND, OR, or XOR. By linking several gates together, it is possible to perform advanced logic operations. Since FPGAs are designed to be programmed for specific applications, they are not suitable for personal computers. However, they have a wide variety of field applications. Examples include telecommunications, data centers, scientific computing, and audio/video processing. Besides being used in servers and high-end computers, they can also be implemented in electronic devices, such as TVs, radios, and medical equipment. Updated September 4, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fpga Copy

FPS

FPS Stands for "Frames Per Second." FPS is used to measure frame rate – the number of consecutive full-screen images that are displayed each second. It is a common specification used in video capture and playback and is also used to measure video game performance. On average, the human eye can process 12 separate images per second. This means a frame rate of 12 FPS can display motion, but will appear choppy. Once the frame rate exceeds 12 FPS, the frames appear less discrete and start to blur together. A frame rate of 24 FPS is commonly used for film since it creates a smooth appearance. Many video cameras record in 30 or 60 FPS, which provides even smoother motion. The 60i (60 FPS, interlaced) format was used for NTSC broadcasts and was replaced by the 60p (60 FPS, progressive scan) with HDTV. This means HDTV not only provides a higher resolution than NTSC, but also yields smoother playback. In order to take advantage of higher frame rates, the display on which the video is output must support a refresh rate that is at least as high as the frame rate. That is why most televisions and monitors support refresh rates of at least 60 hertz. FPS is also used to measure the frame rate of video games. The maximum frame rate is generally determined by a combination of the graphics settings and the GPU. For example, if you are running a new game on an old computer, you may have to reduce the quality of the graphics to maintain a high frame rate. If you have a new computer with a powerful video card, you may be able to increase the graphics settings without reducing the FPS. While some games limit the maximum frame rate, it is not uncommon for a powerful GPU to generate over 100 FPS. For a game to be playable without any lag, it helps to make sure the game maintains over 60 FPS even when a lot is happening on the screen. Therefore, it is wise to configure your graphics settings conservatively to avoid choppy gameplay. NOTE: FPS is also a common abbreviation for "First Person Shooter," a type of 3D video game that gives you the perspective of the main character. Updated February 3, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fps Copy
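One practical way to reason about frame rates is to convert FPS into a per-frame time budget. The Python sketch below (an illustrative calculation, not part of the original entry) shows how much time the system has to render each frame at several common frame rates.

    for fps in (24, 30, 60, 120):
        frame_time_ms = 1000 / fps          # milliseconds available to render one frame
        print(f"{fps} FPS -> {frame_time_ms:.1f} ms per frame")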

FPU

FPU Stands for "Floating Point Unit." An FPU is a processor or part of a processor that performs floating point calculations. While early FPUs were standalone processors, most are now integrated inside a computer's CPU. Even without a floating point unit, a CPU can handle both integer and floating point (non-integer) calculations. However, integer operations use significantly different logic than floating point operations, which makes it inefficient to use the same processor to handle both types of operations. An FPU provides a faster way to handle calculations with non-integer numbers. Any mathematical operation, such as addition, subtraction, multiplication, or division can be performed by either the integer processing unit or the FPU. When a CPU receives an instruction, it automatically sends it to the corresponding processor. For example, 12 + 5 would be processed as an integer calculation, while 1.0023 x 5.789 would get sent to the FPU. While it is possible for a programmer to write an instruction specifically for either processing unit, it is usually unnecessary. Since integer and floating point performance can vary significantly, most processor benchmarks include both types of operations. Integer calculation speed is typically listed as "integer performance" and is labeled "SPECint" in SPEC benchmarks. FPU calculation speed is often listed as "floating point performance" and can be measured in FLOPS. Updated December 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fpu Copy

FQDN

FQDN Stands for "Fully Qualified Domain Name." An FQDN is the most complete domain name that identifies a host or server. The format is: [hostname].[domain].[tld]. For example, "www.techterms.com." is an FQDN since it contains a hostname ("www") and a domain name ("techterms.com"), followed by a trailing period. The name "techterms.com" is not fully qualified because it does not include a hostname or end with a period. An FQDN can be broken down into four parts: Hostname: www, mail, ftp, store, support, etc. Domain: apple, microsoft, ibm, facebook, etc. Top level domain (TLD): .com, .net, .org, .co.uk, etc. Trailing period: the final period in an FQDN indicates the end of the name, implying the previous string is the TLD. Hostname and Domain Name The domain and TLD comprise the domain name, while the hostname specifies different services and protocols for the domain. For example, "mail.example.com" is often the required format when configuring the SMTP server for an email account. The server address "ftp.example.com" is commonly used when connecting to an FTP server. Name servers typically use the naming convention "ns1.example.com" and "ns2.example.com". The hostname may also specify a website subdomain. The most popular subdomain is "www." Other common subdomains include "support," "dev," "store," and "forum." Some sites use variations of "www" such as "web", "ww1," "www2," etc. The Trailing Period Technically, a fully-qualified domain name includes a trailing period, which indicates the end of the name. Since a hostname can include multiple subdomains, such as "en.support.example.com," it is more reliable to process an FQDN backwards, beginning with the TLD and ending with the hostname. Ironically, the period serves as the starting point when a computer processes an FQDN. While the trailing dot is part of a fully-qualified domain name, in most cases, it is implied. For example, you don't need to enter the period when typing in a web address in the address bar of your web browser or when entering the mail server in your email client. In other places, such as a DNS zone file, it is important to include the trailing period for each FQDN. NOTE: A domain name that includes the domain, TLD, and period, but no hostname is called a partially-qualified domain name, or PQDN. Updated December 9, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fqdn Copy
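Because the entry recommends processing an FQDN backwards from the trailing period, here is a minimal Python sketch of that idea (an illustrative example; a real parser should use a library that understands multi-part TLDs such as ".co.uk").

    def parse_fqdn(fqdn):
        name = fqdn.rstrip(".")              # remove the trailing period
        labels = name.split(".")
        tld = labels[-1]                     # last label is the TLD (simplified)
        domain = labels[-2]                  # label before the TLD
        hostname = ".".join(labels[:-2])     # everything else, e.g. "www" or "en.support"
        return hostname, domain, tld

    print(parse_fqdn("www.techterms.com."))  # ('www', 'techterms', 'com')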

Fragmentation

Fragmentation The most efficient way to store a file is in a contiguous physical block. However, over time, as a storage device reads and writes data, fewer blocks of free space are available. In some cases, it may be necessary to split a file into multiple areas of a storage device. This is called file fragmentation. While computers can read fragmented files, it is less efficient than reading a file saved in a single block. When an HDD accesses a fragmented file, the drive head has to jump to multiple areas to read it, which can cause a noticeable slowdown. Fragmentation may also occur on an SSD, and while the effect isn't as significant, it will still take longer for an SSD to read a fragmented file. If your HDD or SSD has a large number of fragmented files, it can significantly affect your computer's performance. Ideally, an HDD or SSD would have no fragmented files and different types of data would be organized into different physical areas of the storage device. For example, system files, applications, and documents would all be organized into separate groups. This arrangement would provide the most efficient data access with the shortest seek time, and therefore the fastest performance. A defragmentation utility can defragment individual files, while an optimization utility will actually separate and move different types of files to different areas of the storage device. Since fragmentation can cause disk performance to decline over time, most modern operating systems include some level of automatic defragmentation. While the OS may not offer as comprehensive defragmentation as a specialized disk utility, it can still help maintain the performance of your storage device. NOTE: Another type of fragmentation is memory fragmentation, which refers to fragmented data in RAM. When memory becomes fragmented, more RAM is allocated than what is used, resulting in less available memory. Updated June 25, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/fragmentation Copy

Frame

Frame In the digital world, the term "frame" can refer to different things depending on the context. Examples include video production, desktop publishing, and web design. 1. Video and Animation In video production and animation, a frame is a single still image within a sequence of images. When played in quick succession, these frames create the illusion of motion. The number of frames displayed per second (FPS) is a key metric in video quality, with standard frame rates being 24 or 30 FPS for film and 60 FPS or higher for video games. The power of the GPU often determines the maximum number of frames per second. 2. Desktop Publishing In desktop publishing and graphic design, frames refer to rectangular spaces where you can place text, images, or other elements. Frames allow you to organize content visually, giving you precise control over the layout and composition of a document. For example, in QuarkXPress, you can create a frame with a custom border, background color, and opacity. 3. Web Design In the early days of the web, HTML frames allowed web designers to divide a browser window into multiple sections. A frame-based layout might have included a header frame at the top, a sidebar frame on the left side, and a third frame for the main content. Each frame displayed a separate webpage loaded from its own URL. HTML frames posed user experience challenges with scrollbars appearing at different screen sizes. They also created security issues by loading multiple pages at once. In the early 2000s, improvements in HTML and CSS made frames obsolete. Web designers can now build structured layouts without separating pages into multiple frames. Updated September 18, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/frame Copy

Framework

Framework A framework, or software framework, is a platform for developing software applications. It provides a foundation on which software developers can build programs for a specific platform. For example, a framework may include predefined classes and functions that can be used to process input, manage hardware devices, and interact with system software. This streamlines the development process since programmers don't need to reinvent the wheel each time they develop a new application. A framework is similar to an application programming interface (API), though technically a framework includes an API. As the name suggests, a framework serves as a foundation for programming, while an API provides access to the elements supported by the framework. A framework may also include code libraries, a compiler, and other programs used in the software development process. Several different types of software frameworks exist. Popular examples include ActiveX and .NET for Windows development, Cocoa for Mac OS X, Cocoa Touch for iOS, and the Android Application Framework for Android. Software development kits (SDKs) are available for each of these frameworks and include programming tools designed specifically for the corresponding framework. For example, Apple's Xcode development software includes a Mac OS X SDK designed for writing and compiling applications for the Cocoa framework. In many cases, a software framework is supported natively by an operating system. For example, a program written for the Android Application Framework will run on an Android device without requiring additional files to be installed. However, some applications require a specific framework in order to run. For example, a Windows program may require Microsoft .NET Framework 4.0, which is not installed on all Windows machines (especially PCs running older versions of Windows). In this case, the Microsoft .NET Framework 4 installer package must be installed in order for the program to run. NOTE: While frameworks generally refer to broad software development platforms, the term can also be used to describe a specific framework within a larger programming environment. For example, multiple Java frameworks, such as Spring, ZK, and the Java Collections Framework (JCF), can be used to create Java programs. Additionally, Apple has created several specific frameworks that can be accessed by OS X programs. These frameworks are saved with a .FRAMEWORK file extension and are installed in the /System/Library/Frameworks directory. Examples of OS X frameworks include AddressBook.framework, CoreAudio.framework, CoreText.framework, and QuickTime.framework. Updated March 7, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/framework Copy

Freeware

Freeware Freeware is software that is free to use. Unlike commercial software, it does not require any payment or licensing fee. It is similar to shareware, but will not eventually ask you for payment to continue using the software. You can legally download and use freeware for as long as you want without having to pay for it. Many types of software programs are offered as freeware, including games, utilities, and productivity applications. Since the software is free, you might wonder what incentive developers have to create freeware programs. Below are a few reasons a program might be offered as freeware: To offer a program developed by a non-profit or educational institution to the public To promote a brand or drive traffic to a company's website To generate revenue through advertisements or in-app purchases within the program To generate revenue by offering other programs during the installation process To provide a "lite" version of a program that may lead users to upgrade to the full-featured version While freeware is free to use, it is still copyrighted and may include a license agreement that restricts usage or distribution of the software. It is also not the same thing as open source software (or "free software"), which allows you to edit and redistribute the program's source code. Updated January 23, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/freeware Copy

Frequency

Frequency Frequency measures the number of times something occurs in a specific amount of time. For example, if someone visits the grocery store twice a week, her shopping frequency is 2 visits per week. While frequency can be used to measure the rate of any action, in technical applications it is typically used to measure wave rates or processing speed. These frequencies often occur multiple times per second and therefore are measured in hertz (Hz) or related units of measurement, such as megahertz or gigahertz. Wave Rates Frequency can be used to measure the rate of waves, such as sound waves, radio waves, and light waves. Audible sound waves have a frequency of roughly 20 Hz to 20,000 Hz (20 kilohertz). Therefore, the sounds you hear have waves that occur anywhere from 20 to 20,000 times per second. Lower frequencies produce low-pitched sounds and are called bass frequencies. Higher frequencies produce high-pitched sounds and are called treble frequencies. Sound waves are compression waves, which actually move small amounts of air particles. Other waves, such as radio waves and light waves do not require air particles to travel. These waves are part of the electromagnetic spectrum and generally have much higher frequencies than sound waves. For example, most radio waves are measured in megahertz (1,000,000 hertz) or gigahertz (1,000,000,000 hertz). Visible light has a frequency of about 400 terahertz to 780 terahertz. Processing Speed In the computer world, frequency is often used to measure processing speed. For example, clock speed measures how many cycles a processor can complete in one second. If a computer has a 3.2GHz processor, it can complete 3,200,000,000 cycles per second. FLOPS, which is used to measure floating point performance, is also a frequency-based calculation (operations per second). Finally, computing speed may also be defined in MIPS, which measures instructions per second. Updated December 27, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/frequency Copy
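Since frequency and period (the time per cycle) are reciprocals, one can be converted to the other with a single division. The Python sketch below (an illustrative calculation) shows the cycle time for a few of the frequencies mentioned above.

    examples = {
        "20 Hz audio tone": 20,
        "100 MHz radio wave": 100e6,
        "3.2 GHz processor": 3.2e9,
    }

    for label, hertz in examples.items():
        period_seconds = 1 / hertz           # period is the reciprocal of frequency
        print(f"{label}: {period_seconds:.3e} seconds per cycle")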

Friend

Friend A friend, in the traditional sense of the word, is a close acquaintance. An online friend, however, is simply a person added to your list of friends on a social networking website. For example, on Facebook, you can select a user and click "Add as Friend" to send a friend request to that user. When the user receives your friend request, he or she may choose to accept or decline the invitation. If the user accepts your request, he or she will be added to your list of friends. Likewise, you will be added to that user's list of friends at the same time. MySpace includes a similar feature. Once you become friends with a user, that person will be able to access your profile with additional viewing rights. This means he or she may be able to view more of your profile and post comments on the "wall" of your profile page. Since non-friends may not be able to view any of your profile, it is common practice to accept most friend requests. Of course, this has led to a rather liberal definition of "friend," since many people have hundreds of online friends who they hardly know. The word "friend" can also be used as a verb, which means adding a user as a friend. When you delete a friend, you "unfriend" that person. Updated November 20, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/friend Copy

Friendly URL

Friendly URL A friendly URL is a Web address that is easy to read and includes words that describe the content of the webpage. This type of URL can be "friendly" in two ways. 1) It can help visitors remember the Web address, and 2) it can help describe the page to search engines. User-Friendly URLs Friendly URLs that are short and easy to remember are considered "user-friendly URLs." These URLs help visitors remember web addresses, which means they can revisit pages by simply typing the URL in the browser's address bar. For example, a company may use the URL "www.[company].com/support/" for the support section of their website. This is much easier to remember than a long, convoluted URL, like "www.[company].com/section/support/default.aspx?id=1&lang=en". Since dynamic sites often load different content based on the variables in the URL (which are usually listed after the question mark), creating user-friendly URLs is not always easy. Therefore, many webmasters now use a strategy called "URL rewriting" to create simpler URLs. This method tells the Web server to load a different URL than the one in the address bar. Therefore, a simple Web address can point to a more complex URL with lots of variables. Since the URL is redirected at the server level, visitors only see the simple Web address. Search Engine-Friendly URLs While user-friendly URLs are helpful for visitors, most webmasters are more concerned with creating search engine-friendly URLs. These URLs include important keywords that describe the content of the page. Since most search engines include the Web address as part of the information that describes a page, placing keywords in the URL can help boost the ranking of the page. Therefore, this strategy has become a popular aspect of search engine optimization or SEO. For example, a blog that includes tips for Windows 7 may have a URL like "blogger.blogger.com/2011/02/windows7.html". A search engine friendly version of this URL may be "blogger.blogger.com/2011/02/helpful-tips-for-using-windows-7.html". While this type of descriptive URL may help with search engine ranking, it is important not to create ridiculously long URLs just to describe the page. After all, search engines still focus primarily on the content of each page when indexing websites. Updated February 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/friendly_url Copy
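As a small illustration of how a page title can be turned into a search engine-friendly URL segment, the Python sketch below (an illustrative example; many content management systems provide a built-in "slug" feature for this) lowercases a title, strips punctuation, and joins the words with hyphens.

    import re

    def make_slug(title):
        slug = title.lower()
        slug = re.sub(r"[^a-z0-9\s-]", "", slug)      # drop punctuation
        slug = re.sub(r"\s+", "-", slug).strip("-")   # replace spaces with hyphens
        return slug

    print(make_slug("Helpful Tips for Using Windows 7"))
    # helpful-tips-for-using-windows-7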

Frontend

Frontend The frontend of a software program or website is everything with which the user interacts. From a user standpoint, the frontend is synonymous with the user interface. From a developer standpoint, it is the interface design and the programming that makes the interface function. Conversely, the backend includes functions and data processing that takes place behind the scenes. One of the primary goals of frontend development is to create a smooth or "frictionless" user experience. In other words, the front end of an application or website should be intuitive and easy to use. While this sounds like a simple goal, it can be surprisingly complex since not all users or devices are the same. For example, an app developed for a mobile device requires a significantly different frontend than a desktop application. Websites must work well on multiple devices and screen sizes, which is why modern web development typically involves responsive design. Examples of frontend elements include: application or page layout; graphics; audio and video elements; text content; user interface elements (buttons, links, toolbars, navigation bars, etc.); input areas (dialog boxes, form fields, text areas, etc.); user flow (how one interface leads to the next); and user preferences, themes, and customizations. User input is received through the frontend and processed in the backend of a program or website. Backend code reads and writes data and sends output to the user via the frontend. Since the backend and frontend of an app or website work together, software jobs often require both frontend and backend development. Developing for both ends is called full-stack development. NOTE: Frontend may also be written "front end" (as a noun) or "front-end" (as an adjective). For simplicity, the closed compound word "frontend" has become an acceptable term for both. Updated April 18, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/frontend Copy

Frozen

Frozen While "frozen" describes the state of Minnesota from November to March, it also refers to an unresponsive computer. When a computer does not respond to any user input, it is said to be frozen. When a computer system freezes, or "locks up," the screen stays the same and does not change no matter what buttons you press on your mouse or keyboard. You can tell if you computer has frozen if the cursor will not move when you move the mouse. A computer typically freezes due to a software malfunction that causes the operating system to "hang." This may happen because of many possible reasons, including a memory leak, an infinite calculation, or another reason. A computer can also freeze because of a hardware malfunction, such as a bad RAM chip or a processor error. Since computers are not supposed to freeze, a software crash is often due to a software programming error or unrecognizable input. Fortunately, modern operating systems, such as Mac OS X are designed so that if one program crashes, it will not affect other programs and the computer will not freeze. If your computer does freeze, you will need to restart the computer to make it function again. You can typically force your computer to shut down by holding the power button for several seconds. And remember - since most computer freezes happen unexpectedly, it is a good idea to save your work frequently! Updated September 16, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/frozen Copy

FSB

FSB Stands for "Front-Side Bus." An FSB was the interface between a computer's CPU and its memory controller. All activity between the CPU and the rest of the computer traveled across the FSB, so it was also called the "system bus." FSBs were common in computers in the 1990s and 2000s, but they became obsolete when processor makers started integrating the memory controller directly into the CPU. A computer's CPU used the FSB to connect to a pair of chips, called the northbridge and the southbridge and referred to collectively as a chipset. The CPU first connected to the northbridge, which then communicated with the system RAM and high-speed expansion slots (like AGP and PCI Express). The northbridge used an internal bus to connect to the southbridge, which communicated with the rest of the computer — the PCI bus, storage devices, USB controller, and other I/O components. The speed of the front-side bus — how many times it transfers data between the CPU and the system memory — was measured in megahertz. The CPU's speed was linked directly to the speed of the FSB and was the result of the bus speed multiplied by a clock multiplier set in the system's BIOS. For example, a 2.4 GHz CPU using a 400 MHz bus used a clock multiplier of 6.0. However, larger clock multipliers made computers less efficient because the processor often had to wait for data to transmit over the system bus. Eventually, CPU designers like Intel and AMD created new processors that integrated the memory controller directly into the CPU, bypassing the need for the northbridge and the front-side bus. Updated January 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/fsb Copy

FTP

FTP Stands for "File Transfer Protocol." FTP is a protocol designed for transferring files over the Internet. Files stored on an FTP server can be accessed using an FTP client, such as a web browser, FTP software program, or a command line interface. An FTP server can be configured to enable different types of access. For example, an "anonymous FTP" configuration allows anyone to connect to the server. However, anonymous users may only be allowed to view certain directories and may not be able to upload files. If anonymous FTP access is disabled, users are required to log in in order to view and download files. The standard FTP protocol is not encrypted, meaning it is vulnerable to packet sniffers and other types of snooping attacks. Therefore, the FTPS and SFTP protocols were developed to provide secure FTP connections. FTPS (FTP with SSL security) provides SSL encryption for all FTP communication. SFTP (SSH File Transfer Protocol) is a secure version of FTP that uses SSH to encrypt all data transfers. To connect to an FTP server, you first need to enter the server name and port number. The server name often starts with "ftp," such as "ftp.example.com." The standard port number for FTP is 21, while SFTP uses port 22 (SSH). If you connect via FTPS, you might be required to enter a custom port number, but the most common one is 990. In order to access an SFTP or FTPS server, you will also need to enter a username and password. Updated January 30, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ftp Copy
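The Python sketch below uses the standard library's ftplib module to connect to an FTP server, log in anonymously, and print a directory listing (an illustrative example; "ftp.example.com" and the "/pub" directory are placeholders).

    from ftplib import FTP

    ftp = FTP("ftp.example.com")    # connects on the standard port 21; placeholder hostname
    ftp.login()                     # anonymous login when no username or password is given
    ftp.cwd("/pub")                 # change to a directory (placeholder path)
    ftp.retrlines("LIST")           # print the directory listing
    ftp.quit()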

Full-Duplex

Full-Duplex Full-duplex, or simply "duplex," is a type of communication in which data can flow two ways at the same time. Full duplex devices, therefore, can communicate back and forth simultaneously. Telephones are common examples of full-duplex devices. They allow both people to hear each other at the same time. In the computer world, most network protocols are duplex, enabling hardware devices to send data back and forth simultaneously. For example, two computers connected via an Ethernet cable can send and receive data at the same time. Wireless networks also support full-duplex communication. Additionally, modern I/O standards, such as USB and Thunderbolt, are full-duplex. The terms duplex and full-duplex can be used interchangeably since both refer to simultaneous bidirectional communication. Full-duplex is often used in contrast to half-duplex, which refers to bidirectional communication, but not at the same time. Simplex communication is even more limited and only supports data transmission in one direction. NOTE: Full-duplex is sometimes abbreviated "FDX." Updated April 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/full-duplex Copy

Function

Function In mathematics, a function is defined as a relationship between defined values and one or more variables. For example, a simple math function may be: y = 2x In this example, the relationship of y to x is that y is twice as much as the value assigned to x. While math functions can be far more complex than this, most are simple relative to functions used in computer programming. This may be why math functions are often referred to as "expressions," while computer functions are often called "procedures" or "subroutines." Computer functions are similar to math functions in that they may reference parameters, which are passed, or input into the function. If the example above were written as a computer function, "x" would be the input parameter and "y" would be the resulting output value. It might look something like this: function double($x) {   $y = 2 * $x;   return $y; } The above example is a very basic function. Most functions used in computer programs include several lines of instructions and may even reference other functions. A function may also reference itself, in which case it is called a recursive function. Some functions may require no parameters, while others may require several. While it is common for functions to return variables, many functions do not return any values, but instead output data as they run. Functions are sometimes considered the building blocks of computer programs, since they can control both small and large amounts of data. While functions can be called multiple times within a program, they only need to be declared once. Therefore, programmers often create "libraries" of functions that can be referenced by one or more programs. Still, the source code of large computer programs may contain hundreds or even thousands of functions. Updated December 28, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/function Copy
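Since the entry mentions functions that reference themselves, here is a short Python sketch (an illustrative example) showing both an ordinary function and a recursive one that calls itself until it reaches a base case.

    def double(x):
        return 2 * x

    def factorial(n):
        # A recursive function calls itself until it reaches the base case n <= 1
        if n <= 1:
            return 1
        return n * factorial(n - 1)

    print(double(5))        # 10
    print(factorial(5))     # 120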

Function Key

Function Key A function key is one of the "F" keys along the top of a computer keyboard. On some keyboards, these range from F1 to F12, while others have function keys ranging from F1 to F19. Function keys may be used as single key commands (e.g., F5) or may be combined with one or more modifier keys (e.g., Alt+F4). In either case, function keys typically serve as keyboard shortcuts to perform a specific function. While function keys have been included on keyboards since the 1960s, they have not had a standard purpose. Over the years, various operating systems and applications have made use of function keys in different ways. While each software developer can decide how to use the "F keys" in his or her program, some actions have been universally recognized. Below are some common uses for function keys in Windows: F1 - Display help screen F2 - Highlight file or folder for renaming F3 - Open search tool Alt+F4 - Close the current window F5 - Refresh the contents of a window or webpage F8 - Boot Windows into Safe Mode by holding F8 during startup Higher number function keys are often used for common system actions, such as adjusting the speaker volume or the display brightness. Based on your system settings, you may need to hold the "Fn" modifier key to perform system actions. In macOS, this is typically reversed, meaning the function keys perform system actions by default. For example, pressing F1 lowers the brightness, while F2 increases it. Pressing Fn+F1 sends an "F1" command rather than changing the brightness. For this reason, most Mac keyboards have icons on the function keys displaying the default function of each key. Updated May 31, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/function_key Copy

Gamma Correction

Gamma Correction Gamma refers to the brightness of a monitor or computer display. It is a setting that determines how bright the output of the display will be. Therefore, "gamma correction" is used to alter the output levels of a monitor. While the gamma setting affects the brightness of a display, it is not identical to the brightness. This is because gamma adjustments are not linear, like brightness levels are. Instead, the gamma setting applies a function to the input levels, which produces the final output level. You can visualize this function as a curved line instead of a straight one. This means the extreme dark and light points are not as affected as the midtones, which are enhanced more because of the non-linear function. Therefore, the gamma setting affects both the brightness and the contrast of the display. The reason gamma correction is used is because the input signal, or voltage, sent to a monitor is not high enough to create a bright image. Therefore, if the gamma is not altered, the images on your screen would be dark and difficult to see. By applying gamma correction, the brightness and contrast of the display are enhanced, making the images appear brighter and more natural looking. NOTE: Common gamma settings are 2.2 for PC monitors and 1.8 for Macintosh monitors. These gamma settings apply an inverse luminance curve to the display's native gamma, which produces natural brightness and contrast levels. Updated August 7, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gammacorrection Copy
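Gamma correction is commonly modeled as a power function applied to normalized pixel values. The Python sketch below (an illustrative example using the common output = input^(1/gamma) encoding form; it is not tied to any particular display) shows how midtones are boosted more than the extreme dark and light points, matching the curved-line description above.

    gamma = 2.2

    for value in (0.0, 0.25, 0.5, 0.75, 1.0):   # normalized input levels (0 = black, 1 = white)
        corrected = value ** (1 / gamma)        # gamma-corrected output level
        print(f"input {value:.2f} -> output {corrected:.2f}")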

Garbage Collection

Garbage Collection In computer science, garbage collection is a type of memory management. It automatically cleans up unused objects and pointers in memory, allowing the resources to be used again. Some programming languages have built-in garbage collection, while others require custom functions to manage unused memory. A common method of garbage collection is called reference counting. This strategy simply counts how many references there are to each object stored in memory. If an object has zero references, it is considered unnecessary and can be deleted to free up the space in memory. Advanced reference counting detects objects that only reference each other, which indicates the objects are unused by the parent process. Garbage collection may also be done at compile-time, when a program's source code is compiled into an executable program. In this method, the compiler determines which resources in memory will never be accessed after a certain time. It can then add instructions to automatically deallocate those resources from memory. While this is an effective way to eliminate unused objects, it must be done conservatively to avoid deleting references required by the program. Garbage collection is an important part of software development since it keeps programs from using up too much RAM. Besides helping programs run more efficiently, it can also prevent serious bugs, such as memory leaks, that can cause a program to crash. Updated January 20, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/garbage_collection Copy
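As a rough illustration of the reference counting strategy described above, Python's own garbage collector keeps a reference count for every object; this minimal sketch inspects that count with the standard sys module:

    import sys

    data = [1, 2, 3]              # one reference: the name "data"
    alias = data                  # a second reference to the same list object
    print(sys.getrefcount(data))  # the count also includes a temporary reference made by the call itself

    del alias                     # drop one reference
    del data                      # drop the last reference; the object becomes garbage
    # With no references remaining, the interpreter is free to reclaim the list's memory.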

Gateway

Gateway A gateway is a hardware device that acts as a "gate" between two networks. It may be a router, firewall, server, or another device that enables traffic to flow in and out of the network. While a gateway protects the nodes within the network, it is also a node itself. The gateway node is considered to be on the "edge" of the network, as all data must flow through it before coming in or going out of the network. It may also translate data received from outside networks into a format or protocol recognized by devices within the internal network. A router is a common type of gateway used in home networks. It allows computers within the local network to send and receive data over the Internet. A firewall is a more advanced type of gateway, which filters inbound and outbound traffic, disallowing incoming data from suspicious or unauthorized sources. A proxy server is another type of gateway that uses a combination of hardware and software to filter traffic between two networks. For example, a proxy server may only allow local computers to access a list of authorized websites. NOTE: Gateway is also the name of a computer hardware company founded in the United States in 1985. The company was acquired by Acer in 2007 but still sells computers under the Gateway name. Updated February 12, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gateway Copy

Gbps

Gbps Stands for "Gigabits per second." 1Gbps is equal to 1,000 Megabits per second (Mbps), or 1,000,000,000 bits per second. Gbps is commonly used to measure data transfer speeds between hardware devices. For many years, data transfer speeds were only measured in Mbps and Kbps. However, modern hardware interfaces can now transfer data at over one gigabit per second, which makes Gbps a necessary unit of measurement. Examples of these interfaces include SATA 3 (6Gbps), USB 3.0 (5Gbps), and Thunderbolt (10Gbps). Additionally, Gigabit Ethernet can transfer data up to 1Gbps. NOTE: The lowercase "b" in Gbps indicates it stands for "Gigabits" rather than "Gigabytes." Since one byte equals eight bits, 1GBps is equal to 8Gbps. While storage capacity is typically measured in bytes, data transfer speeds are typically measured in bits. Therefore, Gbps is much more commonly used than GBps. Updated July 23, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gbps Copy
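The note above boils down to simple arithmetic; a minimal Python sketch of the conversion, using an illustrative 10 Gbps link speed:

    link_speed_gbps = 10                      # e.g., a 10 Gbps interface
    link_speed_gBps = link_speed_gbps / 8     # 8 bits per byte -> 1.25 gigabytes per second
    bits_per_second = link_speed_gbps * 1_000_000_000

    print(link_speed_gBps)      # 1.25
    print(bits_per_second)      # 10000000000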

GDPR

GDPR Stands for "General Data Protection Regulation." GDPR, also known as Regulation (EU) 2016/679, is a European Union law adopted on April 27, 2016 that took effect on May 25, 2018. It replaces the EU Data Protection Directive, which was adopted in 1995. The primary purpose of GDPR is to protect the personal data of residents of countries within the European Union (EU). The 88-page GDPR document begins by stating that the protection of people with regard to their personal data is a fundamental human right. The rules and guidelines within the General Data Protection Regulation are designed to support this premise. It states that all data controllers (organizations that collect and store user data) must protect the data, give users access to the data, and make the data easily transferable. GDPR updates the previous Data Protection Directive to be relevant to modern times and technologies. For example: Recital 42 states that data processors (such as websites) must make their identity clear and ask users for consent before storing their data. Recital 49 addresses malicious activity involving data, such as hacking and denial of service attacks. Recital 83 states that data controllers and processors should mitigate security risks by using encryption. Article 33.1 requires organizations to inform their users within 72 hours of when a data breach has been discovered. To Whom Does GDPR Apply? The GDPR guidelines must be followed by all public and private companies and organizations within the EU. Fines and penalties may be assessed against entities that do not conform to the regulations. While GDPR is commonly associated with IT industries, such as e-commerce websites and cloud services, it applies to all EU organizations that store personal data. Examples include health care services, law firms, educational institutions, scientific research firms, and government entities. While GDPR is enforceable within the European Union, it also applies to companies and organizations outside the EU that do business with EU residents. For example, if a U.S.-based company stores data for individuals living in Sweden, it must conform to the GDPR regulations. On the consumer side, GDPR protects both EU citizens and people who live and work in the EU. The rules apply to individuals engaged in business transactions, but they do not apply to personal or household activities. Updated May 23, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gdpr Copy

Generative AI

Generative AI Generative AI is a category of artificial intelligence models designed to generate new data. These models are trained on large data sets that teach them to identify patterns and structure in text, images, video, and audio. Once trained, their algorithms can generate new data with similar properties in response to user input. Different models can generate paragraphs of natural-sounding text, render images in different artistic styles, or create audio samples. Large Language Models Generative text models (also called large language models) can generate blocks of text based on a user prompt. Even though the output may look like a person wrote it, generative text is really a form of advanced predictive text. The model generates one word at a time, predicting the next word in a sequence based on its training material. Given a large enough set of data, generative language models can generate essays, song lyrics, or even functional source code in multiple programming languages. Different large language models use different sets of training data. The largest models use a wide variety of text, including books, news articles, social media posts, research papers, and essays. Examples of generative text models are ChatGPT, Google Bard, and Bing Chat. Diffusion Models Generative image models create images using a method called diffusion modeling. These models train using machine learning — analyzing large sets of data tagged by human trainers with descriptions for both the content and artistic style of each image. The model uses these tags during training to associate specific patterns and structures within an image with the relevant text. [Image: An image generated using Stable Diffusion] During training, a diffusion model first disassembles an image over a long series of steps, slowly adding random noise. After reducing the original image to static, the model slowly reassembles the image based on its content tags by generating detail to replace the random noise. It repeats this process countless times as its neural network adjusts variables until the reproduced image resembles the original. Once trained, a model can create entirely new images from a user prompt using the keywords and tags it has learned. Examples of generative image models include DALL-E, Midjourney, and Stable Diffusion. Ethical Concerns The rapid rise in generative AI technology comes with some ethical concerns. These models require vast sets of training data — dozens of terabytes of text for a language model and hundreds of millions of images for a diffusion model. Those training sets often include copyrighted material and can create derivative material based on those works without crediting or compensating the original creator. Training sets may also include pictures of people without their consent. Finally, whether the output of a generative AI can be copyrighted (and who owns that copyright) is a legally unsettled area. Generative AI models are only as good as their set of training data allows, and problems within those sets may appear in a model's output. If a model's training data set includes material with biased language or imagery, it may pick up on those biases and include them in its output. While the developers of generative AI models often filter out hate speech, subtle bias is much more difficult to detect and remove. Additionally, a model trained on data that contains factually-incorrect information will pass that information along, potentially misleading its users. Updated May 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/generative_ai Copy

Gibibyte

Gibibyte A gibibyte (GiB) is a unit of data storage equal to 2^30 bytes, or 1,073,741,824 bytes. Like kibibyte (KiB) and mebibyte (MiB), gibibyte has a binary prefix to remove any ambiguity when compared to the multiple possible definitions of a gigabyte. A gibibyte is similar in size to a gigabyte (GB), which is 10^9 bytes (1,000,000,000 bytes). A gibibyte is 1,024 mebibytes. 1,024 gibibytes make up a tebibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the common definition of a gigabyte could mean different numbers depending on what was measured — memory in binary values and storage space in decimal values. Using gibibyte to refer specifically to binary measurements helps to address the confusion. NOTE: Windows operating systems calculate storage and file sizes using mebibytes (MiB) and gibibytes (GiB), but use the labels for megabytes (MB) and gigabytes (GB). macOS uses actual megabytes and gigabytes when calculating sizes. The same disk or file will show slightly different values in each operating system — for example, a file that is exactly 1,000,000,000 bytes will appear as 953.67 MB in Windows and 1 GB in macOS. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 1, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/gibibyte Copy
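A short Python sketch of the difference described above, showing how the same number of bytes maps to decimal and binary units (the 953.67 figure in the note comes from dividing by 2^20):

    size_bytes = 1_000_000_000            # exactly one decimal gigabyte

    gigabytes = size_bytes / 10**9        # decimal (SI) unit
    gibibytes = size_bytes / 2**30        # binary unit
    mebibytes = size_bytes / 2**20        # the value Windows labels "MB"

    print(gigabytes)             # 1.0
    print(round(gibibytes, 4))   # 0.9313
    print(round(mebibytes, 2))   # 953.67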

GIF

GIF Stands for "Graphics Interchange Format." GIF is an image file format commonly used for images on the web and sprites in software programs. Unlike the JPEG image format, GIFs use lossless compression that does not degrade the quality of the image. However, GIFs store image data using indexed color, meaning a standard GIF image can include a maximum of 256 colors. The original GIF format, also known as "GIF 87a," was published by CompuServe in 1987. In 1989, CompuServe released an updated version of the format called "GIF 89a." The 89a format is similar to the 87a specification but includes support for transparent backgrounds and image metadata. Both formats support animations by allowing a stream of images to be stored in a single file. However, the 89a format also includes support for animation delays. Even though the GIF format was published more than a quarter-century ago, it is still widely used on the web. Nearly all GIFs use the 89a format. You can check the version of a specific GIF image by opening it in a text editor and looking at the first six characters listed in the document (GIF87a or GIF89a). Since GIFs may only contain 256 colors, they are not ideal for storing digital photos, such as those captured with a digital camera. Even when using a custom color palette and applying dithering to smooth out the image, photos saved in the GIF format often look grainy and unrealistic. Therefore, the JPEG format, which supports millions of colors, is more commonly used for storing digital photos. GIFs are better suited for buttons and banners on websites since these types of images typically do not require a lot of colors. However, most web developers prefer to use the newer PNG format, since PNGs support a broader range of colors and include an alpha channel. (The alpha channel makes it possible for a single image with transparency to blend smoothly with any webpage background color.) Still, neither JPEGs nor PNGs support animations, so animated GIFs remain popular on the web. NOTE: A GIF image can actually store more than 256 colors. This is accomplished by separating the image into multiple blocks, which each contain unique 256-color palettes. The blocks can be combined into a single rectangular image, which can theoretically produce a "true color" or 24-bit image. However, this method is rarely used because the resulting file size is much larger than a comparable .JPEG file. How to pronounce "GIF" According to Steve Wilhite, the creator of the original GIF format, it is pronounced "jiff" (like the peanut butter brand). However, most people still pronounce it "gif" (with a hard G). Therefore, either pronunciation is acceptable. File extension: .GIF Updated August 20, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gif Copy
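The version check described above can also be done programmatically; a minimal Python sketch that reads the six-byte signature at the start of a GIF file (the filename is a placeholder):

    # Read the first six bytes of a GIF file to determine its version.
    with open("example.gif", "rb") as f:      # "example.gif" is an illustrative filename
        signature = f.read(6)

    if signature == b"GIF89a":
        print("GIF 89a (supports transparency and animation delays)")
    elif signature == b"GIF87a":
        print("GIF 87a (original 1987 format)")
    else:
        print("Not a GIF file")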

Gigabit

Gigabit A gigabit is 10^9 or 1,000,000,000 bits. One gigabit (abbreviated "Gb") is equal to 1,000 megabits or 1,000,000 kilobits. It is one-eighth the size of a gigabyte (GB). Gigabits are most often used to measure data transfer rates of local networks and I/O connections. For example, Gigabit Ethernet is a common Ethernet standard that supports data transfer rates of one gigabit per second (Gbps) over a wired Ethernet network. Modern I/O technologies, such as USB 3.0 and Thunderbolt, are also measured in gigabits per second. USB 3.0 can transfer data at up to 5 Gbps, while Thunderbolt 1.0 can transfer data bidirectionally at 10 Gbps. While gigabits and gigabytes sound similar, it is important not to confuse the two terms. Since there are eight bits in one byte, there are also eight gigabits in one gigabyte. Gigabits are most often used to describe data transfer speeds, while gigabytes are used to measure data storage. Updated October 3, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gigabit Copy

Gigabyte

Gigabyte A gigabyte is 10^9 or 1,000,000,000 bytes. One gigabyte (abbreviated "GB") is equal to 1,000 megabytes and precedes the terabyte unit of measurement. While a gigabyte is technically 1,000,000,000 bytes, in some cases, gigabytes are used synonymously with gibibytes, which contain 1,073,741,824 bytes (1,024 x 1,024 x 1,024 bytes). Gigabytes, sometimes abbreviated "gigs," are often used to measure storage capacity. For example, a standard DVD can hold 4.7 gigabytes of data. An SSD might hold 256 GB, and a hard drive may have a storage capacity of 750 GB. Storage devices that hold 1,000 GB of data or more are typically measured in terabytes. RAM is also usually measured in gigabytes. For example, a desktop computer may come with 16 GB of system RAM and 2 GB of video RAM. A tablet may only require 1 GB of system RAM since portable apps typically do not require as much memory as desktop applications. NOTE: For a list of other units of measurement, view this Help Center article. Updated February 26, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gigabyte Copy

Gigaflops

Gigaflops Gigaflops is a unit used to measure the performance of a computer's floating point unit, commonly referred to as the FPU. One gigaflops is one billion (1,000,000,000) floating point operations per second (FLOPS). The term "gigaflops" appears to be plural, since it ends in "s," but the word is actually singular since FLOPS is an acronym for "floating point operations per second." This is why gigaflops is sometimes written as "gigaFLOPS." Since gigaflops measures how many billions of floating point calculations a processor can perform each second, it serves as a good indicator of a processor's raw performance. However, since it does not measure integer calculations, gigaflops cannot be used as a comprehensive means of measuring a processor's overall performance. Updated January 22, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gigaflops Copy
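As a very rough illustration of what a FLOPS figure means, the Python sketch below times a loop of floating point multiplications and divides the operation count by the elapsed time. It is not a real benchmark (interpreter overhead dominates, and published gigaflops figures come from optimized numeric code), but it shows the basic calculation:

    import time

    operations = 10_000_000
    x = 1.0000001

    start = time.perf_counter()
    for _ in range(operations):
        x = x * 1.0000001          # one floating point multiplication per iteration
    elapsed = time.perf_counter() - start

    flops = operations / elapsed
    print(f"{flops / 1e9:.4f} gigaflops (interpreter overhead included)")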

Gigahertz

Gigahertz A gigahertz (GHz) is a unit of measurement for frequency, equal to 1,000,000,000 hertz (Hz) or 1,000 megahertz (MHz). Since one hertz means that something cycles at a frequency of once per second, one gigahertz means that whatever is being measured cycles one billion times per second. While it can measure anything that repeats within that range, in the context of computers and electronics it often refers to the speed of a computer's processor or the radio frequency of Wi-Fi and other wireless communication. For decades, the clock speed of a computer's CPU was measured in MHz, with higher clock speeds generally indicating a faster processor. In the early 2000s, Intel, AMD, and IBM all released processors with clock speeds over 1,000 MHz, so they began measuring clock speeds in GHz rather than MHz. Within several years of the first 1 GHz processors, new processors had clock speeds over 4 GHz. [Image: The speed of a CPU is measured in GHz, although other factors also affect performance.] Running a processor at high clock speeds requires significant power and produces a lot of heat. After reaching the 4-5 GHz range, chip designers stopped emphasizing clock speed and improved processor performance by other methods like adding more processor cores. For example, a single-core Pentium 4 from 2005 and an 8-core i7 from 2021 run at similar clock speeds (between 3.2 and 3.8 GHz), but the modern i7 performs significantly faster. In addition to processor clock speed, gigahertz is used to measure the frequency of certain radio waves. Wi-Fi networks operate over multiple bands in this part of the spectrum depending on the technology used, including the 2.4 GHz band (for 802.11b, g, and Wi-Fi 4), the 5 GHz band (for Wi-Fi 5 and 6), and the 6 GHz band (for Wi-Fi 6E and 7). Bluetooth wireless connections also use the 2.4 GHz band for short-range personal networks, as do other wireless audio, video, and smart home devices. Finally, some 5G mobile networks use radio waves measured in GHz — mid-band networks transmit between 1.7 and 4.7 GHz, and high-band networks use frequencies between 24 and 47 GHz. NOTE: Other computer components also have clock speeds that can be measured in GHz, like DDR4 and DDR5 memory. Updated September 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/gigahertz Copy

GIGO

GIGO Stands for "Garbage In, Garbage Out." GIGO is a computer science acronym that implies bad input will result in bad output. Because computers operate using strict logic, invalid input may produce unrecognizable output, or "garbage." For example, if a program asks for an integer and you enter a string, you may get an unexpected result. Similarly, if you try to open a binary file in a text editor, it may display unreadable content. GIGO is a universal computer science concept, but it only applies to programs that process invalid data. Good programming practice dictates that functions should check for valid input before processing it. A well-written program will avoid producing garbage by not accepting it in the first place. Requiring valid input also helps programs avoid errors that can cause crashes and other erratic behavior. NOTE: Because the related terms FIFO and LIFO are pronounced with a long "i," GIGO is typically pronounced "guy-go" (not gih-go). This also helps avoid confusion with the prefix "giga," which is pronounced with a soft "i." Updated March 4, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gigo Copy
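A minimal Python sketch of the "check for valid input before processing it" practice mentioned above; the function and prompt are illustrative rather than taken from any particular program:

    def read_integer(prompt):
        """Keep asking until the user supplies a valid integer, so garbage is rejected rather than processed."""
        while True:
            text = input(prompt)
            try:
                return int(text)
            except ValueError:
                print(f"'{text}' is not an integer, please try again.")

    age = read_integer("Enter your age: ")
    print(f"Next year you will be {age + 1}.")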

GIS

GIS Stands for "Geographic Information Systems." GIS systems are tools that gather, analyze, and display information about the surface of the earth. These systems chart data on a map, separated into distinct layers that can be analyzed to find patterns and identify trends. Researchers use GIS tools to create maps of physical features like elevation, tree cover, water, and roadways, as well as more abstract data like population density, traffic patterns, and climate trends. A GIS system combines both hardware and software by using GPS-equipped sensors and measurement tools to collect data, which is input into the software system for analysis. Once the data is collected, the systems save each data type to a distinct layer. Layers can be overlaid on a map and analyzed for relationships between the physical features of the landscape and other data layers. Layers can present information using raster graphics like satellite and aerial photographs or vector graphics in the form of points, lines, and polygons. Many businesses, research institutions, and government agencies use GIS systems for research and data analysis. For example, a GIS system can map seismic data as tracked by a series of sensors to locate the epicenter of an earthquake. Census-takers can use GIS systems to map changing population trends over time, while businesses can use their own GIS data to map where their customers come from to help choose a location for a new store. Updated March 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/gis Copy

Glyph

Glyph A glyph is an individual letter, number, punctuation mark, or other symbol in a font. The term is sometimes used interchangeably with "character," although it is more accurate to say that a glyph is the graphic representation of a character. Every font is made up of a set of glyphs that represent characters as defined by the Unicode standard. Not every font has a glyph for every possible character. Some fonts focus on a particular alphabet like Latin, Cyrillic, or Hebrew. Other fonts offer expanded mathematical symbols, while other fonts like Wingdings supply a series of decorative symbols instead of letters. OpenType Features Fonts that use the OpenType font file format may support a set of special features that can automatically replace certain glyphs with alternate versions, based on the context. Ligatures are special glyphs that can appear when two particular characters are next to each other, replacing the individual glyphs. For example, the letter pairing "fi" in a serif font is often replaced by a special ligature where the tittle (or dot) on the lowercase "i" is absent and replaced by the hood on the lowercase "f". In addition to standard ligatures, many OpenType fonts support other types of special glyphs that can appear automatically or when enabled in a word processor: Contextual alternates are similar to ligatures, appearing to combine two letters into a single shape. Instead of being a single glyph, contextual alternates provide different versions of letters that change based on adjacent letters, often to change the joining behavior in cursive fonts. Alternate number forms provide different styles of numbers for use in different contexts. For example, old-style numbers that are shorter and include descenders, or alternate tabular number forms that are monospaced to line up vertically. Some fonts include special glyphs for fractions beyond the most common set, or even automatically format numbers on either side of a slash character as numerators and denominators. Many fonts also include separate capital letter forms designed for use as small caps instead of simply shrinking the default capital letter forms. NOTE: Computer operating systems include ways to browse through a font's glyphs. In Windows, the Character Map can be found in the Start Menu in the Windows Accessories folder; in macOS, the Character Viewer panel can be opened by pressing control + command + space. Updated November 18, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/glyph Copy

GNU

GNU GNU (pronounced "g-new") is a free Unix-like operating system distributed by the Free Software Foundation. It is available in several different versions, but the most popular is the GNU/Linux system, which uses the Linux kernel. Since the GNU/Linux system is a popular version of Linux, it is often referred to as simply "Linux." However, GNU/Linux is technically a specific version of Linux developed by the GNU Project. Several GNU/Linux distributions are available, including BLAG, Dragora, gNewSense, Kongoni, Musix GNU+Linux, Trisquel, Ututo, and Venenux. While each version of GNU/Linux is based on the GNU system, each has a custom user interface and may include unique bundled applications. For example, Trisquel is designed for small business, home, and educational use, while Musix GNU+Linux is designed primarily for audio production. GNU operating systems, software programs, and development tools, such as the GNU Compiler Collection (GCC), are distributed for free under the GNU General Public License (GPL). This license states that the software may be freely used, modified, and distributed. Therefore, all GNU software is freely available without a commercial license. Many freeware programs developed for other operating systems are now distributed under the GNU General Public License as well. The name "GNU" is a recursive acronym for "GNU's Not Unix." Updated July 5, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gnu Copy

Gnutella

Gnutella Gnutella is a file sharing network that allows users to send and receive files over the Internet. The first part of its name comes from the GNU General Public License, under which the program's source code was originally made available to the public. The second part of the name comes from Nutella, a chocolate hazelnut spread, which apparently the developers ate a lot of while working on the project. The Gnutella network is a peer-to-peer (P2P) network, which allows users on different networks to share files. However, each user still must connect to an "ultrapeer," which is a server that lists files shared by connected users. This makes it possible to search for files across hundreds or even thousands of other computers connected to the network. Gnutella is a network protocol, not an actual program. Therefore, to access other computers on the Gnutella network, you must install a P2P program that supports Gnutella. Fortunately, many of these programs are available as shareware and can be downloaded from the Internet. Some popular Gnutella clients include Acquisition for the Mac and BearShare and Morpheus for Windows. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gnutella Copy

Golden Master

Golden Master A golden master, or GM, is the final version of a software program that is sent to manufacturing and is used to make retail copies of the software. The golden master follows several other stages in the software development process including the alpha, beta, and release candidate stages. The final release candidate (RC) becomes the "release to manufacturing" (RTM) version, which is also called the golden master. The term "golden master" has been used by Apple for many years, and is now used by many other software companies as well. It is also used in the video game industry to refer to the final version of games that are sent to replication facilities. When a program reaches the golden master stage, it is often said to have "gone gold." This means it won't be long before the product is available to consumers for retail purchase. NOTE: Golden master should not be confused with a "gold record," which refers to a music album that has sold 500,000 copies. Updated September 16, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/goldenmaster Copy

Goodput

Goodput When data is transferred over a communications medium, such as the Internet or a local area network (LAN), the average transfer speed is often described as throughput. This measurement includes all the protocol overhead information, such as packet headers and other data that is included in the transfer process. It also includes packets that are retransmitted because of network conflicts or errors. Goodput, on the other hand, only measures the throughput of the original data. Goodput can be calculated by dividing the size of a transmitted file by the time it takes to transfer the file. Since this calculation does not include the additional information that is transferred between systems, the goodput measurement will always be less than or equal to the throughput. For example, the maximum transmission unit (MTU) of an Ethernet connection is 1,500 bytes. Therefore, any file over 1,500 bytes must be split into multiple packets. Each packet includes header information (typically 40 bytes), which adds to the total amount of data that needs to be transferred. Therefore, the goodput of an Ethernet connection will always be slightly less than the throughput. While goodput is typically close to the throughput measurement, several factors can cause the goodput to decrease. For example, network congestion may cause data collisions, which require packets to be resent. Many protocols also require acknowledgment that packets have been received on the other end, which adds additional overhead to the transfer process. Whenever more overhead is added to a data transfer, it will increase the difference between the throughput and the goodput. Updated July 14, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/goodput Copy
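A quick Python sketch of the calculation described above, using illustrative numbers for an Ethernet transfer (1,500-byte packets, 40 bytes of header overhead per packet, and an assumed transfer time):

    file_size_bytes = 10_000_000          # size of the original file
    mtu = 1500                            # bytes of file data per packet, per the example above
    header_bytes = 40                     # extra header information added to each packet
    packets = -(-file_size_bytes // mtu)  # ceiling division: number of packets needed
    bytes_on_wire = file_size_bytes + packets * header_bytes
    transfer_time_s = 8.5                 # an assumed measurement, in seconds

    goodput = file_size_bytes / transfer_time_s     # original data only
    throughput = bytes_on_wire / transfer_time_s    # includes protocol overhead
    print(round(goodput), round(throughput))        # goodput is always <= throughput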

Google

Google Google is the world's most popular search engine. It began in 1996 as a search project by Larry Page and Sergey Brin, two Ph.D. students at Stanford University. They developed a search engine algorithm that ranked Web pages not just by content and keywords, but by how many other Web pages linked to each page. This strategy produced more useful results than other search engines, and led to a rapid increase in Google's Web search market share. The Google ranking algorithm was later named "PageRank" and was patented in September of 2001. In only a short time, Google became the number one search engine in the world. According to Google's website, the company's mission is to "organize the world's information and make it universally accessible and useful." While the Web search remains Google's primary tool for helping users access information, the company offers several other services as well. Some of these include: Froogle - price comparison shopping Image Search - search for images on the Web Google Groups - online discussion forums Google Answers - answers to questions based on a bidding system Google Maps - maps and directions Google Toolbar - a downloadable search tool Blogger - a free blogging service Gmail - Web-based e-mail with several gigabytes of storage AdWords - Advertising services for advertisers AdSense - Advertising services for Web publishers Google has become such a popular search engine that the term "Google" is now often used as a verb, synonymous with "search." For example, if you are looking for information about someone, you can Google that person using Google's search engine. To Google your own term or phrase, visit Google's home page. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/google Copy

Google Drive

Google Drive Google Drive is a service offered by Google that allows you to store and share files online. The service was launched on April 24, 2012 and provides 5 GB of free storage. Additional storage can be purchased for a monthly fee. The goal of Google Drive is to provide a central place to store your files online so that you can access them from anywhere. Additionally, you can access your Google Drive from multiple devices, since the software is available for Windows, Mac OS X, Android, and iOS platforms. The service also provides a web-based interface that allows you to organize your files and search for documents by filename or content. Besides online file storage, Google Drive provides tools for sharing files and collaborating on projects with other users over the Web. For example, instead of emailing large attachments, you can send links to the files from your Google Drive to one or more users. You can also use the web-based Google Docs applications to create or edit documents online. When you share a document with other Google Drive users, everyone can view and edit the document at the same time. Google Drive allows you to view over 30 file types directly in your web browser. These include Google's proprietary formats, as well as other popular file types, such as Adobe Photoshop and Illustrator documents. For more information about Google Drive's proprietary file types, view the Google Drive File Types article at FileInfo.com. Updated April 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/google_drive Copy

Gopher

Gopher Gopher was an early internet technology developed in 1991 at the University of Minnesota, named after the school's mascot, the Golden Gopher. It allowed users to search and retrieve information through a text-based interface, with content organized hierarchically (directories with files and folders). The system was based on a client-server model in which users accessed Gopher servers through a Gopher client. Unlike the modern web, which supports hyperlinks and multimedia content, Gopher was entirely text-based. Since browsing menus could be tedious when searching large directories, developers created a search tool called Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives). Veronica indexed multiple Gopher menus, allowing users to search for files rather than browsing dozens of servers within the Gopher network. The University of Minnesota began licensing the Gopher technology in 1993. However, around the same time, the web emerged with a more appealing graphical user interface. The rise of web-based search engines overshadowed Gopher, limiting its adoption. Today, Gopher retains a niche presence and is accessible from a few servers for historical purposes. Updated November 5, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gopher Copy

GPGPU

GPGPU Stands for "General-Purpose computation on Graphics Processing Units." GPGPU, or GPU computing, is the use of a GPU to handle general computing operations. Modern operating systems allow programs to access the GPU alongside the CPU, speeding up the overall performance. While GPUs are designed to process graphics calculations, they can also be used to perform other operations. GPGPU maximizes processing efficiency by offloading some operations from the central processing unit (CPU) to the GPU. Instead of sitting idle when not processing graphics, the GPU is constantly available to perform other tasks. Since GPUs are optimized for processing vector calculations, they can even process some instructions faster than the CPU. GPGPU is a type of parallel processing, in which operations are processed in tandem between the CPU and GPU. When the GPU finishes a calculation, it may store the result in a buffer, then pass it to the CPU. Since processors can complete millions of operations each second, data is often stored in the buffer only for a few milliseconds. GPU computing is made possible using a programming language that allows the CPU and GPU to share processing requests. The most popular is OpenCL, an open standard supported by multiple platforms and video cards. Others include CUDA (Compute Unified Device Architecture), an API created by NVIDIA, and APP (Accelerated Parallel Processing), an SDK provided by AMD. Updated June 10, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gpgpu Copy

GPIO

GPIO Stands for "General Purpose Input/Output." GPIO is a type of pin found on an integrated circuit that does not have a specific function. While most pins have a dedicated purpose, such as sending a signal to a certain component, the function of a GPIO pin is customizable and can be controlled by software. Not all chips have GPIO pins, but they are commonly found on multifunction chips, such as those used in power managers and audio/video cards. They are also used by system-on-chip (SOC) circuits, which include a processor, memory, and external interfaces all on a single chip. GPIO pins allow these chips to be configured for different purposes and work with several types of components. A popular device that makes use of GPIO pins is the Raspberry Pi, a single-board computer designed for hobbyists and educational purposes. It includes a row of GPIO pins along the edge of the board that provide the interface between the Raspberry Pi and other components. These pins act as switches that output 3.3 volts when set to HIGH and no voltage when set to LOW. You can connect a device to specific GPIO pins and control it with a software program. For example, you can wire an LED to a GPIO and a ground pin on a Raspberry Pi. If a software program tells the GPIO pin to turn on, the LED will light up. Most computer users will not encounter GPIO pins and do not need to worry about configuring them. However, if you are a hobbyist or computer programmer, it can be helpful to learn what chips have GPIO pins and how to make use of them. Updated September 23, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gpio Copy
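A minimal sketch of the LED example above, using the RPi.GPIO Python library commonly available on Raspberry Pi OS; the pin number is illustrative and depends on how the LED is actually wired:

    import time
    import RPi.GPIO as GPIO    # library typically preinstalled on Raspberry Pi OS

    LED_PIN = 18               # illustrative BCM pin number; match it to your wiring

    GPIO.setmode(GPIO.BCM)     # use Broadcom (BCM) pin numbering
    GPIO.setup(LED_PIN, GPIO.OUT)

    GPIO.output(LED_PIN, GPIO.HIGH)   # pin outputs 3.3 volts and the LED lights up
    time.sleep(2)
    GPIO.output(LED_PIN, GPIO.LOW)    # pin drops to 0 volts and the LED turns off

    GPIO.cleanup()             # release the pins when finished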

GPS

GPS Stands for "Global Positioning System." GPS is a satellite navigation system used to determine the ground position of an object. Many commercial products, such as automobiles, smartphones, and GIS devices, include built-in GPS receivers. For civilian use, the system can accurately pinpoint a device within 16 feet (5 meters); specialized devices for land surveying and other government use are accurate within a few inches. The GPS system includes more than 30 satellites (24 active and several backups) deployed in space about 12,000 miles (19,300 kilometers) above the earth's surface. They orbit the earth once every 12 hours at roughly 7,000 miles per hour (11,200 kilometers per hour). The satellites are evenly spread out so that four satellites are accessible via direct line-of-sight from anywhere on the globe. Each GPS satellite broadcasts a message that includes the satellite's current position, orbit, and exact time. A GPS receiver combines the broadcasts from multiple satellites to calculate its exact position using a process called triangulation. Three satellites are required to determine a receiver's location, though a connection to four satellites provides greater accuracy. For a GPS device to work correctly, it must first establish a connection to the required number of satellites. This process can take anywhere from a few seconds to a few minutes, depending on the strength of the receiver. For example, a car's GPS unit will typically establish a GPS connection faster than the receiver in a watch or smartphone. Most GPS devices also use some type of location caching to speed up GPS detection. By memorizing its previous location, a GPS device can quickly determine what satellites will be available the next time it scans for a GPS signal. [Image: Google Maps uses your device's GPS location to help you navigate.] NOTE: Since GPS receivers require a relatively unobstructed path to space, GPS technology is not ideal for indoor use. Smartphones, tablets, and other mobile devices often use other means to determine location, such as nearby cell towers and public Wi-Fi signals. This technology, sometimes referred to as the local positioning system (LPS), is often used to supplement GPS when a consistent satellite connection is unavailable. Updated March 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/gps Copy

GPU

GPU Stands for "Graphics Processing Unit." A GPU is a processor designed to handle graphics operations. This includes both 2D and 3D calculations, though GPUs primarily excel at rendering 3D graphics. History Early PCs did not include GPUs, which meant the CPU had to handle all standard calculations and graphics operations. As software demands increased and graphics became more important (especially in video games), a need arose for a separate processor to render graphics. On August 31, 1999, NVIDIA introduced the first commercially available GPU for a desktop computer, called the GeForce 256. It could process 10 million polygons per second, allowing it to offload a significant amount of graphics processing from the CPU. The success of the first graphics processing unit caused hardware and software developers alike to quickly adopt GPU support. Motherboards were manufactured with faster PCI slots, and AGP slots, designed exclusively for graphics cards, became a common option as well. Software APIs like OpenGL and Direct3D were created to help developers make use of GPUs in their programs. Today, dedicated graphics processing is standard not just in desktop PCs, but also in laptops, smartphones, and video game consoles. Function The primary purpose of a GPU is to render 3D graphics, which are comprised of polygons. Since most polygonal transformations involve decimal numbers, GPUs are designed to perform floating point operations (as opposed to integer calculations). This specialized design enables GPUs to render graphics more efficiently than even the fastest CPUs. Offloading graphics processing to high-powered GPUs is what makes modern gaming possible. While GPUs excel at rendering graphics, the raw power of a GPU can also be used for other purposes. Many operating systems and software programs now support GPGPU, or general-purpose computation on graphics processing units. Technologies like OpenCL and CUDA allow developers to utilize the GPU to assist the CPU in non-graphics computations. This can improve the overall performance of a computer or other electronic device. Updated November 25, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gpu Copy

Graphics

Graphics A graphic is an image or visual representation of an object. Therefore, computer graphics are simply images displayed on a computer screen. Graphics are often contrasted with text, which is comprised of characters, such as numbers and letters, rather than images. Computer graphics can be either two or three-dimensional. Early computers only supported 2D monochrome graphics, meaning they were black and white (or black and green, depending on the monitor). Eventually, computers began to support color images. While the first machines only supported 16 or 256 colors, most computers can now display graphics in millions of colors. 2D graphics come in two flavors — raster and vector. Raster graphics are the most common and are used for digital photos, Web graphics, icons, and other types of images. They are composed of a simple grid of pixels, which can each be a different color. Vector graphics, on the other hand, are made up of paths, which may be lines, shapes, letters, or other scalable objects. They are often used for creating logos, signs, and other types of drawings. Unlike raster graphics, vector graphics can be scaled to a larger size without losing quality. 3D graphics started to become popular in the 1990s, along with 3D rendering software such as CAD and 3D animation programs. By the year 2000, many video games had begun incorporating 3D graphics, since computers had enough processing power to support them. Now most computers come with a 3D video card that handles all the 3D processing. This allows even basic home systems to support advanced 3D games and applications. Updated April 1, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/graphics Copy

Graymail

Graymail Graymail describes email messages that are generally unwanted, but do not fit the definition of spam. Unlike spam, graymail includes messages from mailing lists and newsletters that you have legitimately signed up to receive. Over time, these messages can begin to clutter your inbox and can easily be mistaken for spam. The term "graymail" was coined by the Microsoft Hotmail team in 2011, when the company introduced new methods of filtering incoming messages. Graymail differs from spam in the following ways: The email is solicited. You request to receive graymail by opting in, either directly or indirectly. For example, a direct method is subscribing to a mailing list. An indirect method is providing your email address when you register with an e-commerce website. The email is legitimate. Graymail messages are sent by reputable sources who value their relationship with the recipient. The messages usually contain an unsubscribe option, which is honored by the sender. The email content is targeted to specific users. Graymail messages generally contain content that is specific to your interests. While the emails may include text that is similar to spam messages, such as special offers and promotions, the offers are directed to you and other specific users. Based on Microsoft's research, newsletters and special offers make up the majority of messages in the average user's inbox. By identifying these messages as graymail, Hotmail is able to filter them appropriately. This includes moving newsletters to a specific "Newsletters" category and providing a "Schedule Cleanup" tool that moves or deletes outdated email promotions. Updated February 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/graymail Copy

Grayscale

Grayscale Grayscale is a range of monochromatic shades from black to white. Therefore, a grayscale image contains only shades of gray and no color. While digital images can be saved as grayscale (or black and white) images, even color images contain grayscale information. This is because each pixel has a luminance value, regardless of its color. Luminance can also be described as brightness or intensity, which can be measured on a scale from black (zero intensity) to white (full intensity). Most image file formats support a minimum of 8-bit grayscale, which provides 2^8 or 256 levels of luminance per pixel. Some formats support 16-bit grayscale, which provides 2^16 or 65,536 levels of luminance. Many image editing programs allow you to convert a color image to black and white, or grayscale. This process removes all color information, leaving only the luminance of each pixel. Since digital images are displayed using a combination of red, green, and blue (RGB) colors, each pixel has three separate luminance values. Therefore, these three values must be combined into a single value when removing color from an image. There are several ways to do this. One option is to average all luminance values for each pixel. Another method involves keeping only the luminance values from the red, green, or blue channel. Some programs provide other custom grayscale conversion algorithms that allow you to generate a black and white image with the appearance you prefer. While grayscale is an important aspect of digital images, it also applies to printed documents. When you select "Print," the print dialog box that appears may include a grayscale option. If you choose this option, the color information will be removed from the document before it is printed. As long as your printer has an individual black ink cartridge, when you print in grayscale, it should only use the black ink and none of the color cartridges. Therefore, the "Print in Grayscale" feature is useful if you just need to print a document for reference or don't need a color version. Updated April 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/grayscale Copy
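A small Python sketch of the conversion methods described above, combining a pixel's red, green, and blue values into a single luminance value. The simple average and the weighted formula shown are two common approaches; image editors may use other algorithms:

    def to_gray_average(r, g, b):
        """Simple average of the three channels."""
        return (r + g + b) / 3

    def to_gray_weighted(r, g, b):
        """Weighted average reflecting the eye's sensitivity to green (a common formula)."""
        return 0.299 * r + 0.587 * g + 0.114 * b

    # An orange pixel (channel values 0-255):
    print(round(to_gray_average(255, 128, 0)))   # 128
    print(round(to_gray_weighted(255, 128, 0)))  # 151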

Greenfield

Greenfield Greenfield is a term from the construction industry that refers to undeveloped land. In the IT world, greenfield describes a software project that is developed from scratch rather than built from an existing program. It is often contrasted with "brownfield," which describes software built from an existing program. Greenfield software development is generally more flexible than brownfield development since a new program does not need to fit a specific mold. For example, a greenfield word processor might provide a completely new user interface and may have features not available in any previous program. Additionally, greenfield software does not need to be backwards compatible with older versions of a program. There is no need to support legacy file formats or include previous features to meet end user expectations. While greenfield projects are open-ended, developing software from scratch involves inherent risk. For example, there may not be as large a market for a program as the developer expects. The interface may not be well-received and may need to be altered or redesigned to be more user-friendly. It may take several updates before a greenfield application is successful in the marketplace. Of course, greenfield programs that succeed often benefit from being a unique option for users until similar applications are developed. NOTE: The vast majority of software development is brownfield, since most major software releases are updates to existing programs. However, there has been a recent surge in greenfield development thanks to the new market of mobile apps. Updated December 18, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/greenfield Copy

Grep

Grep Grep is a command-line utility for Unix used for searching text documents. It can perform basic searches and advanced pattern matching using regular expressions. The grep utility is included with all major Unix distributions, including Linux and Mac OS X. Grep can be run by simply typing grep at the command prompt. The command requires specific parameters, including the string to be located and the filename of the file to be searched. For example, if you want to check if the file "readme.txt" has the word "license" in it, you would type the following command: grep license readme.txt If the word is not found, grep will not produce any output. If the string is located in the document, grep will list each occurrence of the string. If you need to perform a more complex search, you can use a regular expression. For example, the command below will search the file "us.txt" and list all lines that begin with "Me" or end with "you" (case-sensitive). grep -E "^Me|you$" us.txt The "-E" parameter in the example above indicates the pattern is an "extended regular expression." By using this option, you can make sure grep processes your search pattern as a regular expression instead of a basic string. Grep also supports several other command-line options, which can be used to create more specific queries and customize the output. You can view these options and learn more about grep by typing "man grep" at the Unix command prompt. NOTE: The name "grep" is an acronym that comes from the phrase, "Globally search a regular expression and print." Updated September 2, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/grep Copy

Grid Computing

Grid Computing Grid computing (also called "distributed computing") is a collection of computers working together to perform various tasks. It distributes the workload across multiple systems, allowing computers to contribute their individual resources to a common goal. A computing grid is similar to a cluster, but each system (or node) on a grid has its own resource manager. In a cluster, the resources are centrally managed, typically by a single system. Additionally, clusters are usually located in a single physical space (such as a LAN), whereas grid computing often incorporates systems in several different locations (such as a WAN). In order for systems in a computing grid to work together, they must be physically connected (over a network or the Internet) and run software that allows them to communicate. The software used in grid computing is called middleware since it translates the information passed from one system to another into a recognizable format. This allows the data computed by one node within the grid to be stored or processed by another system on the grid. Grid computing has many different scientific applications. For example, it is used to model the changes in molecular structures, analyze brain behavior, and compute complex physics models. It is also used to perform weather and economic simulations. Some companies also use grid computing to process internal data and provide services over the Internet. Cloud computing, for instance, is considered to be a subset of grid computing. Updated February 11, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/grid_computing Copy

GUI

GUI Stands for "Graphical User Interface" and is pronounced "gooey." It is a user interface that includes graphical elements, such as windows, icons and buttons. The term was created in the 1970s to distinguish graphical interfaces from text-based ones, such as command line interfaces. However, today nearly all digital interfaces are GUIs. The first commercially available GUI was developed at Xerox PARC and used by the Xerox 8010 Information System, which was released in 1981. After Steve Jobs saw the interface during a tour at Xerox, he had his team at Apple develop an operating system with a similar design. Apple's GUI-based OS was included with the Macintosh, which was released in 1984. Microsoft released their first GUI-based OS, Windows 1.0, in 1985. For several decades, GUIs were controlled exclusively by a mouse and a keyboard. While these types of input devices are sufficient for desktop computers, they do not work as well for mobile devices, such as smartphones and tablets. Therefore, mobile operating systems are designed to use a touchscreen interface. Many mobile devices can now be controlled by spoken commands as well. Because there are now many types of digital devices available, GUIs must be designed for the appropriate type of input. For example, a desktop operating system, such as OS X, includes a menu bar and windows with small icons that can be easily navigated using a mouse. A mobile OS, like iOS, includes larger icons and supports touch commands like swiping and pinching to zoom in or zoom out. Automotive interfaces are often designed to be controlled with knobs and buttons, and TV interfaces are built to work with a remote control. Regardless of the type of input, each of these interfaces is considered a GUI since it includes graphical elements. NOTE: Specialized GUIs that operate using speech recognition and motion detection are called natural user interfaces, or NUIs. Updated April 3, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/gui Copy

GUID

GUID Stands for "Globally Unique Identifier." A GUID is a 128-bit number that uniquely identifies something on a computer system. Each GUID is generated algorithmically and provides an identification string, similar to a serial number, that no other object on any system is likely to have. Programs often use GUIDs to create distinct IDs for database records, user accounts, transactions, security tokens, hardware devices, and many other objects. A GUID consists of 32 hexadecimal digits arranged in groups (8-4-4-4-12) separated by hyphens. Below is a typical example of a GUID: 9AC1A6F1-4A87-4624-B5F5-B74D494E21E0 While each GUID is meant to be completely unique, that uniqueness does not come from any central authority — there's no big database keeping track of GUIDs to guarantee there aren't duplicates. Instead, an individual GUID is likely to be unique due to the gigantic address space allowed by 128 bits of data (more than 340 trillion trillion trillion possible values). You could generate a billion GUIDs per second, and it would still take more than a trillion years to run out of address space. [Image: The New-Guid command in Windows PowerShell creates a new GUID.] The GUID specification outlines several standard versions that each use several factors to increase the likelihood that the result is unique. Version 1 and 2 GUID generators use a timestamp and the device's MAC address as seed values. Version 3 and 5 generators create a GUID by combining a namespace identifier and name given to the object, then hashing the result using either MD5 or SHA-1 algorithms. Version 4 generators use a random or pseudo-random number generator to create GUIDs without including time- or device-specific information. Most programming languages have GUID-generating code libraries that developers can use in their programs, and GUID-generating apps and websites are commonly available. NOTE: GUIDs are also known as UUIDs, and the terms may be used interchangeably. Microsoft prefers to use the term GUID on their platforms, while most other companies use UUID. Updated August 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/guid Copy
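As noted above, most programming languages include a GUID/UUID library; a minimal sketch using Python's standard uuid module:

    import uuid

    # Version 4: a random GUID with no time- or device-specific information
    print(uuid.uuid4())        # e.g., 9ac1a6f1-4a87-4624-b5f5-b74d494e21e0

    # Version 5: a GUID derived by hashing a namespace and a name with SHA-1
    print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.com"))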

H.264

H.264 H.264, also known as Advanced Video Coding (AVC), is a codec for compressing high-definition video. H.264-encoded video is used for multiple applications like streaming movies and television programs, video conferencing, and storing video on Blu-ray discs. It requires roughly half the bitrate of an MPEG-2-compressed video at the same level of quality, offering HD (1920 x 1080) video at 4 to 8 Mbps. Like most other video codecs, H.264 uses lossy compression to discard unnecessary information from a video stream, shrinking the file size and bitrate of the resulting video. It uses a method known as inter-frame compression that breaks each frame into macroblocks, or 16x16 pixel squares. For each macroblock, the encoder analyzes the differences between the current frame and the previous one (known as the reference frame) and identifies the regions that have changed. The encoder then encodes only the differences between the frames. This way, scenes with little movement or a static background will need very little data per frame. H.264-compressed video includes more detail at the same bitrate compared to MPEG-2 compression H.264 video compression uses considerably less data than previous compression standards. However, it requires much more computing power to decode a video stream. Most modern CPUs and GPUs include built-in support for H.264 decoding. Even though the format supports resolutions up to 8K, it is most commonly used for HD video. Higher-resolution 4K video, as well as HDR video using 10-bit color, instead uses the more efficient successor format H.265 to achieve similar levels of quality at an even lower bitrate. NOTE: H.264 is a codec, not a file format. H.264-encoded video is instead saved in a container format like .MKV or .MP4. Updated April 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/h264 Copy
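The inter-frame idea can be illustrated with a toy Python sketch. This is not the actual H.264 algorithm (real encoders work on 16x16 macroblocks with motion estimation and entropy coding); it only shows how storing just the pixels that changed from a reference frame reduces the data needed for each frame:

    # Frames are tiny grayscale grids; only changed pixels are stored.
    def encode_delta(reference, current):
        """Return a list of (row, col, new_value) for pixels that changed."""
        changes = []
        for r, (ref_row, cur_row) in enumerate(zip(reference, current)):
            for c, (old, new) in enumerate(zip(ref_row, cur_row)):
                if old != new:
                    changes.append((r, c, new))
        return changes

    def apply_delta(reference, changes):
        """Rebuild the current frame from the reference frame plus the changes."""
        frame = [row[:] for row in reference]
        for r, c, value in changes:
            frame[r][c] = value
        return frame

    frame1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
    frame2 = [[10, 10, 10], [10, 99, 10], [10, 10, 10]]  # one pixel changed

    delta = encode_delta(frame1, frame2)
    print(delta)                                 # [(1, 1, 99)], far smaller than a full frame
    print(apply_delta(frame1, delta) == frame2)  # True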

H.265

H.265 H.265, also known as High Efficiency Video Coding (HEVC), is a video codec for compressing high-resolution digital video. It is a successor format to the older H.264 codec, offering better compression that results in a similar quality video at half the effective bitrate. It supports resolutions up to 8K UHD and deep color depths for HDR video. The H.265 codec uses inter-frame compression to remove unnecessary data from a video stream. It compares each video frame to the previous frame and encodes only the changes between the two. It analyzes each frame by dividing it into blocks up to 64x64 pixels in size. These blocks, called coding tree units (CTUs), can be subdivided into smaller blocks to encode highly detailed areas. H.265 uses variable-sized CTUs to more efficiently compress video than H.264 H.265 is one of the most common formats for downloading and streaming 4K video. Its higher compression ratio compared to H.264 means that 4K video files can take up less storage space or streaming bandwidth. For example, a 4K video stream using H.264 requires a bitrate of about 30 Mbps, while the same quality level using H.265 only requires 15 Mbps. It also supports 10- and 12-bit color depths by default, instead of as an optional feature, making H.265 a better choice for distributing HDR video. Since H.265 is more complex than H.264, decoding it requires more computing power. Older, less powerful devices may struggle to play back a video at all. However, most CPUs and GPUs made after 2016 include built-in hardware support for encoding and decoding H.265 video streams. The processors in video streaming devices designed for 4K content, like the Apple TV and Roku Ultra, are also optimized for H.265 video. Updated April 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/h265 Copy

Hack

Hack The word hack has two primary computer-related definitions. It often means breaking into a computer system to gain unauthorized access to its data. It may also refer to an inelegant but clever solution to a computing problem in a program's source code. Security Hacks In the context of computer security, a hack is an unauthorized intrusion into a computer system. A hacker is someone who hacks to plant malware on another computer, steal information from a database, or even just vandalize a website. There are many types of hacks that a hacker may use to gain entry into a system. One type of hack involves gaining access through a compromised user account. Once they identify a possible victim's account username, the hacker may try a brute-force attack that repeatedly attempts common passwords. They may use a packet sniffer to intercept traffic and capture unencrypted data, hoping to find a packet containing a password or other valuable data. They may also use a scanning tool to find open ports vulnerable to intrusion. Hackers may also use social engineering hacks to gain access to a system. Social engineering hacks don't use as much technical skill as scanning ports and intercepting web traffic, but instead use psychological manipulation and other tricks. For example, a phishing attack attempts to mislead a victim into divulging their information through a fake email message. Programming Hacks A programming hack is a nonstandard, often-clunky solution to a computer programming problem. It may use undocumented methods to perform a function, or even exploit a bug present elsewhere in a program. These sorts of hacks, also known as "kludges," are often less work than a more robust solution to the problem but may introduce new bugs or unexpected behavior. Updated April 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hack Copy

Hacker

Hacker By its original definition, a hacker is a skilled or clever computer expert who can solve technical problems using non-standard or undocumented means. The term is now generally used instead to refer to a security hacker, or someone skilled at gaining unauthorized access to a computer system. The term originally applied to a general subculture of early computer enthusiasts in the 1960s, 70s, and 80s. These hackers were hobbyists who found enjoyment in creatively pushing the limits of what they could accomplish on their computers. This sort of hacker subculture continues today, often seen in the open source software community, computer clubs, and events known as "hackathons." Security Hackers Now, the term "hacker" is most often used to refer to security hackers who illegally access computer systems, often for nefarious reasons. A hacker's methods of gaining entry to a system can be simple, like guessing or stealing login credentials. Skilled hackers may use more complex methods like exploiting bugs or implanting a virus in a system. They may also use social engineering or phishing to gain access. Not all hackers break into computer systems for nefarious reasons. "White Hat" hackers are also known as ethical hackers; they are security researchers hired by the operators of a system to break into it. They identify weaknesses, like zero-day exploits, to fix them and help fortify a system against other hackers. "Grey Hat" hackers are similar to White Hats — breaking into systems and reporting the vulnerabilities they discover, but without advance permission. Grey Hats generally perform this work to claim an offered payment, known as a "bug bounty." "Black Hat" hackers, meanwhile, are the hackers who break into systems for criminal purposes like stealing information or extorting payment via ransomware. Updated December 8, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hacker Copy

Half-Duplex

Half-Duplex Half-duplex is a type of communication in which data can flow back and forth between two devices, but not simultaneously. Each device in a half-duplex system can send and receive data, but only one device can transmit at a time. An example of a half-duplex device is a CB (citizens band) radio. The CB protocol, which is used by truckers, police officers, and other mobile personnel, allows users to communicate back and forth on a specific radio frequency. However, since the CB protocol only supports half-duplex communication, only one person can speak at a time. This is why people communicating over two-way radios often say "over" at the end of each statement. It is a simple way of telling the recipient he or she can respond if necessary. Most communication protocols are designed to be full-duplex, rather than half-duplex. Full-duplex communication allows computers and other devices to communicate back and forth at the same time. While some computer networks can be set to half-duplex mode to limit bandwidth, full-duplex communication is much more common. NOTE: Half-duplex is sometimes abbreviated "HDX." Updated April 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/half-duplex Copy

Halftone

Halftone A halftone, or halftone image, is an image composed of discrete dots rather than continuous tones. When viewed from a distance, the dots blur together, creating the illusion of continuous lines and shapes. By halftoning an image (converting it from a bitmap to a halftone), it can be printed using less ink. Therefore, many newspapers and magazines use halftoning to print pages more efficiently. Originally, halftoning was performed mechanically by printers that printed images through a screen with a grid of holes. During the printing process, ink passed through the holes in the screen, creating dots on the paper. For monochrome images, only one pass was needed to create an image. For multicolor images, several passes or "screens" were required. Today's printers are more advanced and typically do not contain physical screens. Instead, the halftone images are generated by a computer and the resulting image is printed onto the paper. By using a process called dithering, modern printers can randomize the dot patterns, creating a more natural appearance. This produces realistic images using far less ink than fully saturated ones. Like a standard bitmap, the quality of a halftone image depends largely on its resolution. A halftone with a high resolution (measured in LPI) will have greater detail than a halftone with a low resolution. While the goal of halftoning is typically to create a realistic image, sometimes low resolutions are used for an artistic effect. Updated September 2, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/halftone Copy
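For illustration, the short Python sketch below converts a grayscale image (a grid of 0-255 values) into pure black-and-white dots using ordered dithering with a 4x4 Bayer matrix. This is only one simple digital approach to halftoning; production printers use more sophisticated screening and error-diffusion methods:

    BAYER_4X4 = [
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ]

    def halftone(pixels):
        """Convert a 2D list of 0-255 grayscale values into 0/255 dots."""
        output = []
        for y, row in enumerate(pixels):
            out_row = []
            for x, value in enumerate(row):
                # Scale the tiled matrix entry (0-15) up to the 0-255 range
                threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * 16
                out_row.append(255 if value > threshold else 0)
            output.append(out_row)
        return output

    # A simple left-to-right gradient; darker areas produce sparser dots
    gradient = [[x * 16 for x in range(16)] for _ in range(4)]
    for row in halftone(gradient):
        print("".join("#" if v else "." for v in row))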

HAN

HAN Stands for "Home Area Network," which is the same thing as a home network. It is a local area network (LAN) within a home that may include both wired and wireless devices. The central hub of a HAN is the router. In most cases, the router is connected to a modem, which communicates with an ISP. The modem provides an Internet connection to the router, which in turn provides Internet access to all connected devices within the home. Some modems include built-in routers, in which case the modem and router are the same device. Besides sharing the same Internet connection, devices on a HAN may communicate with each other. For example, a laptop can transfer files to a desktop computer over a home network. Similarly, a smartphone can cast videos to a smart TV if they are connected to the same router. In larger homes, a HAN may include repeaters to extend the range of the wireless signal. Both long distances and objects — especially cement walls — can cause Wi-Fi signals to degrade. A weak wireless connection may still be useable, but it may be inconsistent and produce slow data transfer rates. Wi-Fi repeaters or "mesh networks" solve this problem by boosting the signal in various areas of the home. Previous generations of HANs mainly provided Internet access to multiple computers. Modern HANs are the foundation of smart homes, which include many connected devices besides computers. Examples include smartphones, TVs, thermostats, lights, sensors, and home assistants, such as Alexa and Google Home. Updated April 24, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/han Copy

Handle

Handle In the online world, a handle is another word for a username. It can refer to the name you use in chat rooms, web forums, and social media services like Twitter. The term "handle" dates back to the 1970s and comes from Citizens Band radio (CB radio), a short-distance radio communications medium. CB radio users would identify themselves by unique nicknames, which became known as handles. When online chat became popular in the 1990s, the term "handle" transferred to the Internet and became a common way for users to identify themselves online. While handle and username are often used synonymously, they do not always mean the same thing. For example, when you choose a username for a secure website like a bank or investment site, it is not necessarily your handle. This is because the username is intended to be private and used in combination with a password to create your login. Handles, on the other hand, are public usernames that can be used to identify people online. The most popular web service to use handles is Twitter. In fact, Twitter usernames are often called "Twitter handles." You can reference other users in a tweet using "Mentions" or the "@reply" feature. To mention another Twitter user in your post, simply type an at symbol (@) immediately before the user's handle. A link to the user's profile will show up in the published tweet and the user will be notified that you have mentioned or replied to him or her. Updated April 8, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/handle Copy

Handshake

Handshake In the real world, a handshake is a customary greeting between two people. Similarly, a computer handshake serves as a greeting between two computer systems. It is commonly used to initialize a network connection between two hosts. A computer handshake may be completed between any two systems that communicate with each other on the same protocol. The two systems may be a client and server or simply two computers on a P2P network. The handshake confirms the identities of the connecting systems and allows additional communication to take place. Handshaking over a network is commonly called a 3-Way Handshake or "SYN-SYN-ACK." A successful handshake involves seven steps: Host A sends a synchronize (SYN) packet to Host B. Host B receives Host A's SYN request. Host B sends a synchronize acknowledgement (SYN-ACK) message to Host A. Host A receives Host B's SYN-ACK message. Host A sends an acknowledge (ACK) message to Host B. Host B receives Host A's ACK message. The connection between the two systems is established. When a system initiates a handshake, there are three possible outcomes: No response – If the system receiving the handshake is not available or does not support the protocol the initiating system uses, it may not respond to the request. Connection refused – The system receiving the handshake is available and understands the request, but denies the connection. Connection accepted – The system receiving the handshake is available, receives the request, and accepts the connection. The third outcome listed above is the only one in which the handshake is completed. If a handshake is successful, the two systems can begin communicating and transferring data over the established protocol. Examples of protocols that use handshaking include TCP, TLS, and SSL. Updated October 22, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/handshake Copy
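In everyday programming, the TCP handshake happens automatically when a connection is opened. The minimal Python sketch below (using example.com as a placeholder host) simply opens a TCP connection; the SYN, SYN-ACK, and ACK exchange completes before the call returns:

    import socket

    # create_connection() performs the TCP three-way handshake under the hood.
    # It returns only after the connection is established, or raises an error
    # if the handshake fails, times out, or is refused.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        print("Handshake complete; connected to", sock.getpeername())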

Haptics

Haptics Haptics, also called haptic technology, are mechanisms that produce small vibrations that help an electronic device provide tactile feedback. Smartphones, wearable devices, video game controllers, and laptop touchpads often contain haptic technology. It can create the illusion that a device is reacting to your touch or even pushing back against it. Many devices use haptics to simulate clicks and taps when you interact with a touchscreen, touchpad, or other input device. For example, when you type on a smartphone's on-screen keyboard, it may slightly vibrate every time you tap a letter. This tactile feedback simulates the feeling of pressing a real key and lets you know that the smartphone recognized your input. Haptic feedback is also used for silent notifications, especially on smartwatches — notifying you of a new message by quietly tapping your wrist instead of playing a sound. Early forms of haptic feedback were present in video game controllers, like the Nintendo 64's optional Rumble Pak or the Sony PlayStation's DualShock controller. These early technologies, often called "force feedback," created vibrations by rapidly spinning an unbalanced weight. These controllers were bulky and could only provide a rough rumble effect. Modern haptics instead produce vibrations using small electromagnetic coils and spring-loaded weights. These coils are significantly smaller and can create far more precise movements than spinning weights. Updated February 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/haptics Copy

Hard Copy

Hard Copy A hard copy is a physical copy of a document, image, or other digital data. It generally refers to a printed copy of a text file or photograph. Since hard copies of documents are no longer editable, they can be considered permanent versions. There are many reasons why a hard copy of a document is needed. First, printing a hard copy creates a physical backup of the document at that time. It can be stored separately and referenced later. It can be delivered physically and be viewed by anyone easily since no digital device is required. Many tax and legal filings require a hard copy of a document. Conversely, a soft copy is a digital version of a file that can be opened and edited by software. Unlike a hard copy, which is permanent, a soft copy is editable and can be changed at any time. It can also be deleted by accident, which makes having a hard copy as a backup a good idea in many cases. Updated November 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hardcopy Copy

Hard Disk

Hard Disk A hard disk is a type of data storage device that stores data on a magnetically charged spinning platter. It is a non-volatile storage medium that continues to store data even when disconnected from power. A hard drive may contain a single hard disk platter, or it may include several platters stacked on top of each other in a spindle. Hard disk platters are made from glass, aluminum, or ceramic material and covered with a thin magnetic coating. This coating is made from a cobalt alloy that forms microscopic grains on the disk surface that can each hold a magnetic charge, representing binary data — either a 0 or 1. These grains are so densely packed that some hard disk platters can store a terabit of data per square inch. Stacking several hard disk platters into a spindle allows a single hard drive to hold more than a dozen terabytes. A thin actuator arm sits along the top and bottom of each platter, detecting and changing the magnetic charges of each grain on the surface to read and write data to the disk. Since hard disks involve moving parts, they cannot read or write data as fast as SSDs. They are also more susceptible to physical damage. For example, if an actuator arm were to come in contact with one of the platters as it spins, it would scratch the surface and damage the disk. NOTE: The terms "hard disk," "hard drive," and "hard disk drive" are often used interchangeably. "Hard disk" refers to the platters that store data, while "hard drive" and "hard disk drive" refer to the larger device that contains the disk platters, controller chips, and other circuitry required to interface with the computer's motherboard. Updated July 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hard_disk Copy

Hard Drive

Hard Drive A hard drive (sometimes called a "hard disk drive") is a type of non-volatile data storage device. Hard drives are one of the most common forms of data storage, along with solid-state drives (SSDs). They are often mounted inside a computer and connected directly to the motherboard, although external hard drives in separate enclosures are common backup storage devices. Solid-state drives have replaced hard drives as the primary storage device in most computers, but hard drives are still commonly used as secondary storage. Hard drives store binary data magnetically on several spinning platters inside the drive. Each platter has a magnetic coating that can hold billions of positive and negative magnetic charges. Each separate magnetic charge represents a single bit, either 1 or 0. As the platters spin (typically at either 5,400 or 7,200 revolutions per minute), a small actuator arm moves across the platters reading and writing these magnetic charges. The platters retain these charges even when the drive is disconnected from its power source, keeping the data intact. ⚠ Because they store data magnetically, hard drives are vulnerable to data loss if placed near a strong magnet. When compared to solid-state drives, hard drives have some technical drawbacks. First, because they have moving parts, hard drives may mechanically fail if the platters or actuator arm are damaged. They also read and write data much slower than SSDs. However, hard drives are available at much higher capacities — 10 terabytes or more, as of 2023 — at a much lower cost-per-terabyte than SSDs, so they are still useful in situations where total disk space is more important than speed. Updated January 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hard_drive Copy

Hard Token

Hard Token A hard token, or hardware token, is a piece of hardware that authenticates a user in a multi-factor authentication system. Hard tokens can take several forms, including small USB tokens, smart cards, or dedicated password-generating fobs. Hard tokens often work alongside other authentication methods like a username and password, but some systems may use them as the only necessary authentication method. Hard tokens come in several different forms. The most common type of hard token is a disconnected token, also known as an OTP token. These tokens do not plug into a computer to provide authentication but instead have a small screen that displays a one-time-use passcode when you click a button on the device. You can use this passcode as part of an MFA system, similar to a soft token authentication app. Another common form of hard token is a connected token, which plugs into a computer's USB port and can authenticate a user using one of several methods. Some USB tokens can insert a cryptographically-generated passcode whenever you press a button on the device and work without extra drivers or software. Others use a combination of private and public encryption keys to answer a cryptographic challenge issued during the login process. These tokens require compatible software (now built into most operating systems and web browsers) and the support of the service you're logging into. Several YubiKey USB tokens in multiple sizes and form factors Hard tokens are also often used to increase security when logging into a local computer system. Smart keycards can act as hard tokens; some require physical contact by placing a chip in a reader, and others use RFID chips to authenticate through proximity. Some specialized systems use wireless fobs that contain NFC or Bluetooth LE radios that can unlock a computer when the user is nearby and lock it again once they walk away. Updated June 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hard_token Copy

Hardware

Hardware A computer's hardware consists of its physical parts, including its internal pieces and connected external devices. Hardware components perform a computer's tasks like calculating data, storing information, processing input, and providing output. Any part of a computer that you can physically touch is hardware. All hardware devices, whether internal or external, include chips on a circuit board to perform a function. All hardware also requires a way to interface with the rest of the computer, usually by connecting to a port, socket, or wireless radio. After that, pieces of hardware will include other parts that help them fulfill their function, like buttons, sensors, protective cases, or even cooling fans to prevent overheating. Internal Components Internal hardware parts may also be called "components." These consist of everything inside a computer's case, including the following examples: A motherboard is the main system board every other component connects to, consisting of a large circuit board with many integrated chipsets and controllers. Motherboards also include sockets, slots, and ports that you can use to connect new components. A CPU is the brain of a computer that processes instructions and controls the rest of the hardware. Memory, or RAM, temporarily stores information for the CPU to process. Storage devices like hard drives and solid state drives hold the computer's applications and files. Unlike memory, which temporarily holds data, storage devices keep data even after the computer is powered off. Controller cards and expansion cards plug into special slots on a motherboard to expand a computer's capabilities. Examples include more powerful video cards, additional network interfaces, and extra ports. A power supply converts the alternating current from a wall socket into a direct current that powers a computer's components. External Peripherals External hardware devices, meanwhile, are usually called "peripherals." These include everything that connects to a computer by one of its ports or plugs, including these common examples: Monitors display the output of a computer's video card to provide a graphical user interface that someone can use to monitor and control the computer. Keyboards and mice accept input from the person operating the computer, allowing them to enter text, issue commands, and manipulate objects displayed on the screen. Speakers and headphones play the sounds generated by a computer. Printers allow a person to print a document from their computer onto paper, while scanners allow them to create a digital copy of a document or photograph. Multi-function printers that combine these functions are now common. External storage devices store data just like internal storage devices do, but you can easily disconnect one from one computer and reconnect to another to move data between computers. You can add new hardware to a computer to extend its functionality. You need to turn the computer off before installing new internal components, but external peripherals are often plug-and-play. Most hardware requires drivers — data files containing instructions for the computer's operating system on how to control the new hardware — either provided on a disc bundled with the hardware or available for download from the manufacturer's website. NOTE: While hardware refers to a computer's components, software refers to the programs and applications that run on a computer. Updated December 21, 2022 by Brian P. 
APA MLA Chicago HTML Link https://techterms.com/definition/hardware Copy

Hash

Hash A hash is a function that converts one value to another. Hashing data is a common practice in computer science and is used for several different purposes. Examples include cryptography, compression, checksum generation, and data indexing. Hashing is a natural fit for cryptography because it masks the original data with another value. A hash function can be used to generate a value that can only be decoded by looking up the value from a hash table. The table may be an array, database, or other data structure. A good cryptographic hash function is non-invertible, meaning it cannot be reverse engineered. Since hashed values are generally smaller than the originals, it is possible for a hash function to generate duplicate hashed values. These are known as "collisions" and occur when identical values are produced from different source data. Collisions can be resolved by using multiple hash functions or by creating an overflow table when duplicate hashed values are encountered. Collisions can be avoided by using larger hash values. Different types of compression, such as lossy image compression and media compression, may incorporate hash functions to reduce file size. By hashing data into smaller values, media files can be compressed into smaller chunks. This type of one-way hashing cannot be reversed, but it can produce an approximation of the original data that requires less disk space. Hashes are also used to create checksums, which validate the integrity of files. A checksum is a small value that is generated based on the bits in a file or block of data such as a disk image. When the checksum function is run on a copy of the file (such as a file downloaded from the Internet), it should produce the same hashed value as the original file. If the file does not produce the same checksum, something in the file was changed. Finally, hashes are used to index data. Hashing values can be used to map data to individual "buckets" within a hash table. Each bucket has a unique ID that serves as a pointer to the original data. This creates an index that is significantly smaller than the original data, allowing the values to be searched and accessed more efficiently. Updated April 21, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hash Copy
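As a concrete example of the checksum use case, the following Python sketch computes a SHA-256 digest of a file with the standard hashlib module; the file name is only a placeholder. Comparing the digest of a downloaded copy against the published value verifies that the file was not altered:

    import hashlib

    def file_checksum(path, chunk_size=65536):
        """Hash a file in chunks so large files don't need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical file name; identical files always produce identical digests
    print(file_checksum("disk_image.iso"))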

Hashtag

Hashtag A hashtag is a number symbol (#) used to label keywords in a tweet. The name "hashtag" was coined by Twitter and combines the word "hash" (another name for the number symbol) and "tag," since it is used to tag certain words. In order to tag a keyword in a Twitter post, simply type a number symbol (Shift+3) immediately before the word. For example, you can tag the word "tech" in a tweet by typing "#tech." Twitter automatically turns hashtagged words into links to a dynamic feed. This feed is updated in real-time and lists all recent tweets containing the same hashtag. When you post a tweet with a hashtag, your tweet will show up in the public feed. Besides clicking on hashtags within tweets, you can also search for hashtags using Twitter's search feature. Hashtags are used to categorize tweets, since all tweets with the same hashtag are related. Therefore, searching for hashtags is a good way to monitor hot topics or trends. For example, #marchmadness is popular during the March NCAA tournament, while #election might be popular during political elections. Company names, such as #apple and #microsoft, are common hashtagged terms and may be used when people comment on new product releases. As people post tweets with the same hashtag, the corresponding feed can easily turn into a discussion. A hashtag can be any word or combination of words and can also include numbers. You can use trending hashtags in your tweets or make up your own. However, it is best to use popular hashtags if you want others to see your tweet. Each Twitter post can include multiple hashtags, though Twitter discourages spamming tweets with hashtags and recommends using no more than three hashtags per tweet. It is common to include hashtags at the end of a tweet, but you can hashtag any word by simply adding a number symbol in front of it. Updated March 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hashtag Copy

HD

HD Stands for "High Definition." HD describes any video with a resolution greater than standard definition (SD) video. While "high-definition" has a broad scope, it can also refer to a specific HD resolution, such as 1920x1080 pixels. Before HDTV was introduced in the late 1990s, the most common video resolutions were 640x480 (NTSC) and 720x576 (PAL and SECAM). The NTSC (National Television System Committee) format was standard in the U.S. and Japan, while Europe and other parts of the world used PAL (Phase Alternating Line) or SECAM (Sequential Color with Memory). These resolutions became known as "standard definition" (SD) only after "high definition" (HD) was introduced. When HD devices first hit the market, there were three standard versions: 720p - 1280x720 progressive scan 1080i - 1920x1080 interlaced 1080p - 1920x1080 progressive scan The HD video formats listed above each have a 16:9 aspect ratio, which is wider than the 4:3 aspect ratio of standard definition. 720p was also called "HD ready," while 1080i and 1080p, called "Full HD," were most often used for HD broadcasts. Since early HD televisions only supported 1080i, the standard HD broadcast format was 1080i. It was not until Blu-ray discs became available that 1080p became standard. Since "HD" refers to any resolution greater than standard definition, it also encompasses higher resolutions like 4K and 8K. However, to avoid ambiguity, 4K resolution (3840x2160) is typically called Ultra HD or UHD, while HD most often describes 1920x1080 video. Updated February 23, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hd Copy

HDD

HDD Stands for "Hard Disk Drive." "HDD" is often used interchangeably with the terms "hard drive" and "hard disk." However, the term "hard disk drive" is technically the most accurate, since "hard drive" is short for "hard disk drive" and the "hard disk" is actually contained within the hard disk drive. The HDD is the most common storage device used to store data. Most computers made in the 1980s, 1990s, and 2000s include an internal hard disk drive. The first PC hard drives stored only a few megabytes of data, while modern hard drives may contain several terabytes. Some desktop computers have multiple internal hard drives, and external hard drives are often used for additional storage or backup purposes. HDDs are non-volatile, meaning they do not need electrical power to maintain their data. They store data magnetically using a series of spinning platters, or magnetic discs, which record individual bits as ones and zeroes. Data is recorded using a write head that writes bits onto the disk. A read head is used to read data from the disk. Both of these heads are located at the end of a tapered metal component called an actuator arm. The appearance is similar to a turntable, but the hard disk spins hundreds of times faster than a record. Additionally, an HDD reads data digitally, while a record player captures an analog signal. Hard disk drives come in many shapes and sizes, but 3.5-inch models are most common in desktop computers, and 2.5-inch models are usually found in laptops. A typical consumer HDD has a rotational speed of 7200 RPM, but some high-end hard disk drives run as fast as 15,000 RPM. Laptop HDDs typically run at either 4200 or 5400 RPM. Even at the fastest rotational speed, hard drives are still limited by the "seek time" of the drive head. Therefore, SSDs, which do not have a drive head, have become a popular high-performance alternative to HDDs in recent years. Updated November 8, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hdd Copy

HDMI

HDMI Stands for "High-Definition Multimedia Interface." HDMI is a trademark and brand name for a digital interface used to transmit audio and video data in a single cable. It is supported by modern audio/video equipment, such as 4K televisions, HDTVs, audio receivers, DVD and Blu-ray players, cable boxes, and video game consoles. While other types of A/V connections require separate cables for audio and video data, a single HDMI cable carries the audio and video streams together, eliminating cable clutter. For example, an analog component cable connection requires three cables for video and two for audio, totaling five cables in all. The same information can be transmitted digitally using one HDMI cable. Because HDMI is a digital connection, HDMI cables are less prone to interference and signal noise than analog cables. Also, since most components, such as DVD players and digital cable boxes, process information digitally, HDMI eliminates the digital-to-analog and analog-to-digital conversion other interfaces require. Therefore, HDMI typically produces the best quality picture and sound compared to other types of connections. HDMI cables are often more expensive than analog cables since they cost more to manufacture. But it is important to remember that a single HDMI cable can replace multiple analog cables. The single all-purpose connection simplifies setup and makes it easy to connect and disconnect devices. It also supports digital commands, allowing devices to communicate with each other. For example, if your TV is connected via HDMI to a receiver, the TV can automatically turn the receiver on and off when the TV is turned on and off. It can also synchronize the volume between the TV and receiver. Modern HDMI receivers allow you to visually configure the receiver settings using your TV as the interface. NOTE: HDMI is a trademark owned by HDMI Licensing Administrator, Inc. (HDMI LA) that serves as an indicator of source for HDMI LA’s brand of digital interfaces used to connect high-definition equipment. More information is available at HDMI.org. Updated August 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hdmi Copy

HDR

HDR Stands for "High Dynamic Range." HDR is a technology that improves the range of color and contrast in a digital image. It may be used for both photos and videos, though the implementations are different. HDR Photography An HDR photo is created by capturing multiple images with different exposures at the same time. Instead of taking a single photo, the camera captures three or more photos in quick sequence using different exposure values. For example, a digital camera might take one photo with a -1 exposure setting, one with the default (automatic) exposure, and one with a +1 exposure setting. The camera (or smartphone) then combines the photos to create a single image. HDR image processing is useful when taking photos of scenes with high contrast. For example, if you are standing in front of a window during a sunny day, the backlight might cause the automatic exposure to be too low, making your face appear dark. If you adjust the exposure to brighten your face, the window will become too bright — possibly completely white. By combining multiple exposures, these levels can be averaged out, allowing your face to be visible in the photo while not overexposing the light behind you. Many digital cameras and smartphones, such as the iPhone, have an HDR option. You can often choose from three different HDR settings — OFF, ON, or Auto, which only takes an HDR photo in high-contrast situations. Most cameras will also capture a regular photo at the same time, so you can choose between the HDR or regular photo for each HDR image captured. HDR Video HDR video is captured in a similar way to an HDR photo. Each scene is recorded with different exposures (or different ISO settings) at the same time. These scenes are blended together to create a single recording. This typically results in a brighter video, though it can also create a grayish effect that may look unrealistic. For this reason, HDR recordings are often modified in post-production before being published. In order to display HDR video correctly, the output device (such as a TV or computer monitor) must also support HDR. An HDR-capable TV, for example, must support certain video output standards. These include 10-bit color and at least 90% of the DCI-P3 color space, a range of RGB colors defined by the American film industry. An HDR display must also support specific HDR formats, such as HDR10, Dolby Vision, and Hybrid Log Gamma (HLG). Most HDR televisions support all of these standards and can be updated using firmware updates that can be downloaded from the Internet. Updated September 15, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hdr Copy
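As a rough illustration of the blending step, the Python sketch below simply averages three differently exposed versions of the same (made-up) image with NumPy. Real HDR merging weights each exposure by how well-exposed each pixel is and then applies tone mapping, but a simple average shows how detail from dark and bright regions can be combined:

    import numpy as np

    def blend_exposures(under, normal, over):
        """Naive HDR-style merge: per-pixel average of three exposures."""
        stack = np.stack([under, normal, over]).astype(np.float32)
        return stack.mean(axis=0).astype(np.uint8)

    # Tiny 2x2 grayscale "images" captured at -1, 0, and +1 exposure (made-up values)
    under  = np.array([[ 10,  20], [  5,  15]], dtype=np.uint8)
    normal = np.array([[ 60, 120], [ 30,  90]], dtype=np.uint8)
    over   = np.array([[160, 250], [ 80, 230]], dtype=np.uint8)

    print(blend_exposures(under, normal, over))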

HDTV

HDTV Stands for "High Definition Television." HDTV is a high-quality digital video standard for broadcast, cable, and satellite television. It replaced older broadcast standards, now known as standard definition or SDTV. High-resolution HDTV images have five times as many pixels as SDTV, providing a sharper, more detailed image with richer colors. Unlike SDTV, which was broadcast using analog signals, HDTV broadcasts are fully digital. Lossy digital video compression using codecs such as MPEG-2 and H.264 allows broadcasters to carry a much higher-resolution image over the same broadcasting medium. Digital signals are also less susceptible to interference, although when interference is present, it typically results in a dropped signal instead of a noisy picture. The HDTV standard defines three video formats — 1280x720 progressive scan (720p), 1920x1080 interlaced (1080i), and 1920x1080 progressive scan (1080p); for comparison, SDTV only displayed interlaced images at either 720x480 or 720x576 (depending on the region). HDTV also uses a 16:9 aspect ratio that is wider and more cinematic than the 4:3 aspect ratio of SDTV. HDTV supports up to five audio channels (with a sixth low-frequency subwoofer channel) compared to SDTV's two channels. The relative resolution of an SDTV image compared to HDTV resolutions While HDTV broadcasts started in many parts of the world in the 1990s, HDTV did not see widespread adoption until the mid-2000s. When these broadcasts first began, HDTVs used the same CRT display technology as standard-definition TVs. By 2010, most TVs sold were HDTV-compatible LCD sets. As technology improved, the higher-resolution UHD standard was introduced, which supports higher resolutions (like 4K and 8K), better audio, and expanded dynamic range. However, since UHD requires significantly more bandwidth, HDTV resolutions are still more common for TV broadcasts over cable, satellite, and airwaves. Updated November 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hdtv Copy

HDV

HDV Stands for "High-Definition Video." HDV is a format designed to record high-definition digital video on standard DV and MiniDV cassette tapes. It was developed and released in 2003 by the HDV consortium, which includes JVC, Sony, Canon, and Sharp. HDV provided an upgrade from standard-definition DV video for personal and low-end professional use. HDV video cameras can record widescreen (16:9) video at either 720p or 1080i. A typical DV cassette stores up to 120 minutes of video at 720p or 80 minutes at 1080i, while a standard MiniDV cassette has roughly half that capacity. The HDV format uses MPEG-2 video compression to store data more efficiently than DV, allowing digital video cassettes originally designed for SD video to store HD footage. However, MPEG-2 uses temporal compression that works across multiple frames instead of compressing each frame individually (as in DV); this makes frame-accurate editing of HDV video more complicated and prone to errors. MPEG-2 compression also means that the video source starts compressed directly from the camera, which can introduce artifacts when the edited video is re-compressed. A Sony HDR-HC9 video camcorder capable of recording HDV video to MiniDV cassettes While HDV was a popular format supported by many digital camcorders in the 2000s and 2010s, it is now effectively obsolete outside of niche markets. Professional video cameras now record at 4K and 8K resolutions and save video directly to much smaller flash memory cards or internal SSDs. Meanwhile, most consumers carry smartphones in their pockets capable of recording much higher-quality video than HDV (with increased bitrates, higher frame rates, and even high dynamic range) without requiring anyone to juggle multiple cassette tapes. You can also watch, edit, and share videos directly from the phone without needing to transfer multiple DV tapes to a computer for editing. Updated December 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hdv Copy

Header

Header In computing, the term "header" can refer to a number of different things. Some of the most common uses include 1) a document header, 2) a webpage header, and 3) a file header. 1. Document Header Many word processing programs allow you to add a header and/or footer to each page. The header is a small area at the top of the document, while the footer is located at the bottom. Document headers are often used to display the document title or company name at the top of each page. By default, the header content is the same on all pages, so when you edit the header on one page, it will update on all the other pages as well. If you break a document up into sections, you can specify different headers for each section. Word processors provide different ways of adding and editing header content. Several programs allow you to select View → Headers and Footers to display the header and footer content. Once visible, you can type new content inside each section. Most versions of Microsoft Word allow you to simply double-click in the header area to activate the header and add or edit text. If you want to change the height of the header, you can open the Document Formatting window and modify the margins. 2. Webpage Header The header of a webpage typically includes the company or organization's logo, as well as the main navigation bar. This section, which resides at the top of each webpage, is often part of a template and therefore is the same across all pages within a website or section of a website. HTML5 even includes a <header> tag that developers can use to specify the header section of each webpage. NOTE: A webpage header should not be confused with the <head> section in the HTML, which includes the page title, meta tags, and links to referenced files. 3. File Header A file header is a small amount of data at the beginning of a file. File headers vary between file formats, but they generally define the content of the file and list specific file attributes. For example, the file header of a JPEG image file may include the image format, color profile, and application that created the file. An MP3 audio file may include the song name, tagging format, and compression information. You can view a file's header by dragging the file icon to a text editor and reading the first few lines. While binary files may contain a lot of garbled characters, the header information is often still readable. Updated October 2, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/header Copy
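File headers can also be inspected programmatically. The short Python sketch below reads the first few bytes of a file and compares them against a handful of well-known "magic number" signatures; the file name and the small signature table are only examples:

    SIGNATURES = {
        b"\xff\xd8\xff": "JPEG image",
        b"\x89PNG\r\n\x1a\n": "PNG image",
        b"%PDF": "PDF document",
        b"ID3": "MP3 audio with an ID3 tag",
    }

    def identify(path):
        """Guess a file's type from the signature at the start of its header."""
        with open(path, "rb") as f:
            header = f.read(16)
        for signature, description in SIGNATURES.items():
            if header.startswith(signature):
                return description
        return "Unknown format"

    print(identify("photo.jpg"))  # hypothetical file name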

Headphones

Headphones Headphones are small speakers that can be worn in or around your ears. Traditional headphones have two ear cups attached by a band that is placed over your head. Smaller headphones, often called earbuds or earphones, are placed inside the outer part of your ear canal. Like speakers, headphones contain transducers that convert an audio signal into sound waves. Headphones that connect to an analog audio port (such as a 3.5 mm audio jack) process analog audio. Headphones that connect to a digital port, such as a USB or Lightning port, process digital audio. Digital headphones must also include a digital-to-analog converter, or DAC, in order to produce the analog output. While headphones function the same way as standalone speakers, their output is significantly different. Speakers must generate enough amplitude for the sound to be audible from far away. Headphones only need to send the sound waves a few millimeters to your ear drums. Therefore, headphone components are much smaller and often more precise than speaker components. Even though the output is much lower than standalone speakers, the close proximity of headphones to your ears can make the volume sound very loud. Most headphones even support the full 20 Hz to 20,000 Hz frequency range. However, the bass is less noticeable since they don't move as much air. Headphones come in many shapes and sizes, but a few common types are listed below. In-ear - earphones that fit snugly in your ear canal; typically contain rubber ends that help give them a secure fit. Earbuds - earphones that rest inside the edge of your ear; often included with smartphones and portable media players. On-ear - headphones that rest on your ears but don't encompass the whole ear; helps isolate outside sounds. Over-ear - headphones that wrap around your ears; also called "around-ear" or "circumaural" headphones; come in open and closed versions. Open versions let in outside noise, while closed versions limit external sounds. Noise-cancelling - creates an extra quiet listening environment by canceling outside noise; contains a microphone that detects external sounds and sends the opposite waveform to your ear to "cancel" the noise; available in all types of form factors. Updated September 3, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/headphones Copy

Heap

Heap A heap is a data structure made up of "nodes" that contain values. A typical heap has a root node at the top, which may have two or more child nodes directly below it. Each node can have two or more child nodes, which means the heap becomes wider with each level. When displayed visually, a heap looks like an upside-down tree, and its general shape resembles a pile, or heap. While each node in a heap may have two or more child nodes (also called "children"), most heaps limit each node to two children. These types of heaps are also called binary heaps and may be used for storing sorted data. For example, a "binary max heap" stores the highest value in the root node. The second and third highest values are stored in the child nodes of the root node. Throughout the tree, each node has a greater value than either of its child nodes. A "binary min heap" is the opposite, where the root node stores the lowest value and each node has a lower value than its children. In computer science, heaps are often drawn as simple diagrams. However, actually storing data in a heap is more complex. In order to create a heap, programmers must write individual algorithms for inserting and deleting data. The values inserted into a heap are typically stored in an array, which can be referenced by a program. Since the data in a heap is partially sorted, it provides an efficient means of finding and removing the highest or lowest value. NOTE: "The heap" is also a programming term that may be used to describe dynamically allocated memory. This block of memory can be accessed by active applications. Since memory in the heap is allocated dynamically, it may grow or shrink depending on how much memory is being used. Updated August 2, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/heap Copy
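Many languages include a ready-made binary heap, so programmers rarely write the insertion and deletion algorithms by hand. For example, Python's standard heapq module maintains a binary min heap inside an ordinary list, keeping the smallest value at the root (index 0):

    import heapq

    heap = []
    for value in [42, 7, 19, 3, 25]:
        heapq.heappush(heap, value)    # insert while preserving the heap property

    print(heap[0])              # 3  -- the root holds the smallest value
    print(heapq.heappop(heap))  # 3  -- removing the root re-balances the heap
    print(heapq.heappop(heap))  # 7  -- repeated pops return values in ascending order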

Heartbleed

Heartbleed Heartbleed is a security hole in OpenSSL that was discovered by the Finnish security firm Codenomicon and publicized on April 7, 2014. OpenSSL is the encryption technology used to create secure website connections over HTTPS, establish VPNs, and encrypt several other protocols. Since OpenSSL is used by roughly two-thirds of web servers, the vulnerability is considered one of the most significant security holes discovered since the beginning of the web. How does Heartbleed work? The Heartbleed exploit takes advantage of the initial communication between the client and server. This preliminary step is commonly called a "handshake," though OpenSSL provides a variation called a "heartbeat." The heartbeat is used to establish a secure connection, but the data transmitted during the heartbeat is not sent securely. By sending false information to a server, a hacker can retrieve 64 kilobyte chunks of data from the server's cache. While this is a small amount of data, it is enough to contain a username, password, or other confidential information. By making several requests in a row, a hacker can potentially capture large amounts of private data cached in a server's memory. The Heartbleed bug is specific to OpenSSL 1.0.1 through 1.0.1f and version 1.0.2-beta1. Other versions of OpenSSL and other types of TLS (transport layer security) implementations are not affected. After the bug was made known on April 7, many web servers were patched immediately with version 1.0.1g. However, it is unknown how many servers were affected and how many still are using the vulnerable version of OpenSSL. How does Heartbleed affect me? It is unlikely that you are directly affected by the Heartbleed bug. While the security hole went undetected for two years, there is little evidence that the exploit has been widely used. Still, to be safe, you can protect yourself by updating your passwords for website logins, email accounts, and other online services. Updated April 11, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/heartbleed Copy

Heat Sink

Heat Sink A computer's CPU may perform millions of calculations every second. As the processor continues to work at a rapid pace, it begins to generate heat. If this heat is not kept in check, the processor could overheat and eventually destroy itself. Fortunately, CPUs include a heat sink, which dissipates the heat from the processor, preventing it from overheating. The heat sink is made out of metal, such as a zinc or copper alloy, and is attached to the processor with a thermal material that draws the heat away from the processor and towards the heat sink. Heat sinks can range in size from barely covering the processor to several times the size of the processor if the CPU requires it. Most heat sinks also have "fins," which are thin slices of metal that are connected to the base of the heat sink. These additional pieces of metal further dissipate the heat by spreading it over a much larger area. A fan is often used to cool the air surrounding the heat sink, which prevents the heat sink from getting too hot. This configuration is referred to as a heat sink and fan or HSF combination. While heat sinks are used in nearly all computer CPUs, they have become commonplace in video card processors, or GPUs, as well. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/heatsink Copy

HEIF

HEIF Stands for "High Efficiency Image File Format." HEIF is a versatile image container file format designed for digital photographs and other raster images. It supports multiple image compression codecs, although it primarily uses HEVC to compress images. The MPEG group introduced the HEIF format in 2015, and it is now commonly used by digital cameras and smartphones to save photos. HEIF is more efficient than the older JPEG format, compressing images to roughly half the file size of a JPEG image at the same image quality. The HEIF file format introduces new features not supported by the JPEG format that make it more useful and versatile. It supports very high color depths of up to 16 bits per channel (which allows for more than 281 trillion colors), compared to JPEG's maximum of 8 bits per channel. That extra depth makes HEIF ideal for high dynamic range images. HEIF files can embed EXIF metadata in photos to record camera information, as well as XMP metadata to save postprocessing details. A HEIF image in Apple Photos with its metadata visible in a popup Since it is a container format, a HEIF file can contain more than one image. For example, a camera can capture multiple photos in burst mode and save them all as a single HEIF file; other uses include capturing multiple exposures with different settings or a stereo image of a 3D scene. It can also add modification layers that save extra image data, like an alpha transparency layer or a depth map that records how far different parts of the image are from the camera. Modification layers also allow non-destructive image editing, saving cropping, rotation, color adjustments, and other edits on their own layers. You can adjust modification layers independently or remove them later without permanently affecting the image itself. Apple has used HEIF as the default image format for photos taken by the iPhone camera app since 2017, taking advantage of hardware acceleration for HEVC encoding to shrink file sizes. HEIF support on other platforms came gradually, with native support included by default in Windows 11 and Android 12. Canon, Nikon, Sony, and other camera makers now include HEIF support in many digital cameras to save HDR photos. NOTE: The HEIF format uses multiple file extensions, based on the platform and codecs used. A .HEIF file may use any codec, including JPEG, while .HEIC files specifically use HEVC. Updated September 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/heif Copy

Hertz

Hertz Hertz (abbreviated: Hz) is the standard unit of measurement for frequency. Since frequency is measured in cycles per second, one hertz equals one cycle per second. Hertz is commonly used to measure wave frequencies, such as sound waves, light waves, and radio waves. For example, the average human ear can detect sound waves between 20 and 20,000 Hz. Sound waves close to 20 Hz have a low pitch and are called "bass" frequencies. Sound waves above 5,000 Hz have a high pitch and are called "treble" frequencies. While hertz can be used to measure wave frequencies, it is also used to measure the speed of computer processors. For example, each CPU is rated at a specific clock speed. This number indicates how many instruction cycles the processor can perform each second. Since modern processors can perform millions or even billions of instructions per second, clock speeds are typically measured in megahertz or gigahertz. Abbreviation: Hz Updated March 3, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hertz Copy
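Since one hertz is one cycle per second, the length of each cycle is simply 1 divided by the frequency. A quick worked example in Python for a hypothetical 3.2 GHz processor:

    clock_speed_hz = 3_200_000_000     # 3.2 GHz = 3.2 billion cycles per second
    period_seconds = 1 / clock_speed_hz

    print(f"{clock_speed_hz / 1e9} GHz = {clock_speed_hz:,} cycles per second")
    print(f"One cycle lasts about {period_seconds * 1e9:.3f} nanoseconds")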

Heuristic

Heuristic Generally speaking, a heuristic is a "rule of thumb," or a good guide to follow when making decisions. In computer science, a heuristic has a similar meaning, but refers specifically to algorithms. When programming software, computer programmers aim to create the most efficient algorithms to accomplish various tasks. These may include simple processes like sorting numbers or complex functions such as processing images or video clips. Since these functions often accept a wide range of input, one algorithm may perform well in certain cases, while not very well in others. For example, the GIF image compression algorithm performs well on small images with few colors, but not as well as JPEG compression on large images with many colors. If you knew you were only going to be dealing with small images that didn't have a wide range of colors, GIF compression would be all you need. You wouldn't have to worry about large, colorful images, so there would be no point in optimizing the algorithm for those images. Similarly, computer programmers often use algorithms that work well for most situations, even though they may perform inefficiently for uncommon situations. Therefore, a heuristic process may include running tests and getting results by trial and error. As more sample data is tested, it becomes easier to create an efficient algorithm to process similar types of data. As stated previously, these algorithms are not always perfect, but work well most of the time. The goal of heuristics is to develop a simple process that generates accurate results in an acceptable amount of time. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/heuristic Copy

Hexadecimal

Hexadecimal Hexadecimal is a base-16 number system used to represent binary data. Unlike the base-10 (or decimal) system that uses 10 symbols to represent numbers, hexadecimal digits use 16 symbols — the numerals "0"-"9" to represent values 0-9, and the letters "A"-"F" to represent values 10-15. In a base-10 system, you count in multiples of 10. After each group of 10, you increase the next-higher digit: for example, counting "8-9-10-11-12" increments the second digit (the tens column) from 0 to 1 and starts the first digit (the ones column) back at 0; counting "98-99-100-101-102" increments both the second and third (the hundreds column) digits. In a base-16 system, you increment the next-higher digit every 16 values instead: for example, counting "8-9-A-B-C-D-E-F-10-11-12" does not increment the second digit until after F (the 15th value) instead of 9. The numbers 0-255 represented as hexadecimal numbers Computers use hexadecimal values to represent binary data in a shorter, human-readable format. Instead of a long string of 0s and 1s, hexadecimal values are compact and legible. Each hexadecimal digit represents one possible combination of 4 bits, also known as a nybble; for example, the hexadecimal digit B represents the specific series of bits 1011. Since a byte is 8 bits, a combination of two hexadecimal digits can represent any possible byte. Hexadecimal digits are also case-insensitive — bb74 and BB74 are the same value. Hexadecimal numbers can also represent large numbers using fewer characters than decimal numbers. For example, two hexadecimal digits can represent any value between 0 and 255, while six hexadecimal digits can represent any value between 0 and 16,777,215. For example, the following values all represent the same number: Decimal: 47,988 Hex: BB74 Binary: 1011 1011 0111 0100 Using Hexadecimals Since many hexadecimal numbers may be misinterpreted as either decimal numbers or words, they often include a special notation to indicate that they are hexadecimal numbers. For example, Unix and the C programming language (as well as descendants like C++ and Java) use the prefix 0x before a hexadecimal number in source code; for example, 0xBB74. Unicode characters represented by hexadecimal numbers use the prefix U+; for example, the letter "T" in Unicode is U+0054. Web designers and graphic designers often use hexadecimal values to represent RGB colors using hexadecimal color codes. These codes start with a # symbol, followed by six hexadecimal digits — two digits representing a value between 0 and 255 for each of the red, green, and blue channels. For example, #800080 represents the color purple by mixing red and blue in equal amounts (using the hex digit 80, representing the value 128) and no green (using the hex digit 00, representing the value 0). Six hexadecimal digits can represent more than 16 million possible colors. Updated May 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hexadecimal Copy
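The conversions described above are easy to verify in most programming languages. The following short Python sketch (illustrative only) expresses the decimal value 47,988 in hexadecimal and binary, and parses a hexadecimal string back into a decimal number:

    value = 47988
    print(hex(value))        # '0xbb74'  (hexadecimal, with the 0x prefix)
    print(bin(value))        # '0b1011101101110100'  (binary)
    print(int("BB74", 16))   # 47988  (parsing hex digits is case-insensitive)
    print(int("bb74", 16))   # 47988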

HFS

HFS Stands for "Hierarchical File System." HFS was the file system used by Classic Mac OS to store files on floppy disks and hard disks. HFS was a hierarchical file system that organized files in a tree-like structure, creating a root directory on each disk and organizing files into subfolders of that root directory. It was used by all Macintosh computers between 1985 and 1998, when an updated replacement, called HFS+, was introduced. HFS included several features that made it stand out among file systems of the time. First, it supported case-insensitive filenames up to 31 characters. Unlike the previous Macintosh file system (which was designed specifically for 400 KB floppy disks), HFS supported 800 KB floppies and hard disks of arbitrary sizes. It supported metadata for each file, including the ability for each file to specify an associated default application (which allowed for two files of the same type to open in different programs automatically). Finally, each file had both a data fork and a resource fork. The data fork contained the file's contents, like the text in a document or the executable code in an application, while the resource fork contained separate resources like a file's icon or an embedded image. With the release of Mac OS 8.1 in 1998, Apple introduced a successor file system called HFS+ (or HFS Extended) that addressed several shortcomings of the original HFS. It supported larger disk volumes, increased the maximum file size, and allowed for filenames up to 255 characters. It could break files up into smaller clusters, making it more efficient at allocating disk space. Each volume could contain more individual files (more than 4 million files per disk, up from 65,535). It also added support for journaling, which helps protect the file system from corruption due to system crashes. HFS+ was the default file system on all Macs until it was succeeded by APFS, following the release of macOS 10.13 in 2017. Updated September 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hfs Copy

Hibernate

Hibernate Hibernate is a power-saving mode for desktop and laptop computers that serves as a middle ground between sleep mode and a complete shutdown. Like sleeping, hibernating preserves the contents of the system memory, such as open files and active programs. Unlike sleeping, hibernating turns the computer's power off entirely instead of maintaining a low-power state. Hibernating helps a laptop maintain its battery charge while it sits unused and disconnected from power for an extended time. When you select "Hibernate" (typically found in the Power menu), your computer saves the contents of the system RAM to a file on its startup disk. This hibernation file preserves the entire state of the computer at that time, including all open files and applications, and even the position of open windows on the desktop. Then, the computer shuts down and stops drawing power. Laptops stop using battery charge, and desktop computers can be safely unplugged. When the computer starts back up, it loads the contents of the hibernation file and restores previously open applications exactly as they were. The Hibernate option in the Windows 11 Power menu Windows, macOS, and Linux all include some form of hibernation. If enabled in Windows, the Hibernate option appears in the Shut Down menu alongside Sleep, Shut Down, and Restart. If you do not see "Hibernate," you can enable the option in the Control Panel as long as your computer's hardware supports it. macOS and Linux support a special type of hibernation called "Safe Sleep" in macOS and "hybrid-sleep" in Linux. This mode writes the contents of a computer's memory to disk each time it sleeps, while still supplying power to the RAM. If the computer remains powered during sleep, it can wake instantly by loading the saved state from RAM. If it loses power at any point while sleeping, the saved state can be restored from the disk instead. Updated September 21, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hibernate Copy

HiDPI

HiDPI HiDPI is an adjective that describes a display with high pixel density or DPI. A HiDPI screen can display sharper text and more detailed images than a standard DPI display. While there is no official DPI threshold for a HiDPI display, a typical HiDPI monitor has a DPI of at least 200. Most HiDPI monitors have twice the resolution of a comparable "standard DPI" display, which is ideal for scaling bitmap images. For example, a typical 27" monitor may have a resolution of 2560x1440 pixels (109 DPI). A HiDPI 27" monitor may have a resolution of 5120x2880, which is twice the pixel density (218 DPI). A 27" iMac with a 5K display has a 5120x2880 resolution. Apple calls this a "Retina display," which is Apple's trademarked term for a HiDPI display. HiDPI is related to resolution but also factors in screen size. For example, a small smartphone screen with an HD resolution of 1920x1080 may be considered HiDPI, while a 24" display with a 1920x1080 resolution is not HiDPI. Computer monitors and laptop screens must have a much higher resolution than smartphones to be considered HiDPI. Android HiDPI Standards Nearly all smartphones have high pixel density, so most mobile displays are HiDPI. To help developers target different DPIs, Google has established the following Android pixel density standards: ldpi - low-density (~120dpi) mdpi - medium-density or "baseline dpi" (~160dpi) hdpi - high-density (~240dpi) xhdpi - extra-high-density (~320dpi) xxhdpi - extra-extra-high-density (~480dpi) xxxhdpi - extra-extra-extra-high-density (~640dpi) Updated June 10, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hidpi Copy
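Pixel density can be estimated from a screen's resolution and its diagonal size: divide the diagonal length in pixels by the diagonal length in inches. The following Python sketch (the function name is illustrative) reproduces the 27" monitor figures mentioned above:

    import math

    def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
        """Approximate pixel density from resolution and diagonal screen size."""
        diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
        return diagonal_px / diagonal_in

    print(round(pixels_per_inch(2560, 1440, 27)))  # ~109 PPI (standard 27" monitor)
    print(round(pixels_per_inch(5120, 2880, 27)))  # ~218 PPI (HiDPI / 5K 27" monitor)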

High Sierra

High Sierra High Sierra is the common name of macOS 10.13. It is the 14th version of Apple's desktop operating system, and follows Sierra, macOS 10.12. Apple released macOS High Sierra on September 25, 2017. The similarity of the name "High Sierra" to "Sierra" is not coincidental, as it was designed to be a performance update rather than a major feature update. Previous performance updates include Snow Leopard (which followed Leopard), Mountain Lion (which followed Lion), and El Capitan (which followed Yosemite). The most notable performance improvement in High Sierra is the introduction of a new file system, called "Apple File System" or APFS. The new file system is designed to operate more efficiently on SSDs and other flash memory drives. It improves overall responsiveness and speeds up common tasks, such as copying files and calculating the size of a folder. It also includes built-in encryption to protect data from unauthorized access. Finally, APFS makes system backups faster and more reliable. High Sierra natively supports the High Efficiency Video Coding (HEVC) codec, which provides as good or better quality than previous codecs, while reducing file size by up to 40%. It also supports Apple's Metal 2 API, which takes advantage of modern GPUs and improves virtual reality rendering. Apps updated in High Sierra include Photos, which adds photo organization options and new editing capabilities; Mail, which includes support for compressed messages; and Notes, which supports tables. Additionally, High Sierra makes it easier to share individual iCloud files and allows family members to share a single iCloud storage plan. Updated September 28, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/high_sierra Copy

High-Level Language

High-Level Language A high-level language is a programming language designed to simplify computer programming. It is "high-level" since it is several steps removed from the actual code run on a computer's processor. High-level source code contains easy-to-read syntax that is later converted into a low-level language, which can be recognized and run by a specific CPU. Most common programming languages are considered high-level languages. Examples include: C++ C# Cobol Fortran Java JavaScript Objective-C Pascal Perl PHP Python Swift Each of these languages uses a different syntax. Some are designed for writing desktop software programs, while others are best suited for web development. But they are all considered high-level since they must be processed by a compiler or interpreter before the code is executed. Source code written in languages like C++ and C# must be compiled into machine code in order to run. The compilation process converts the human-readable syntax of the high-level language into low-level code for a specific processor. Source code written in scripting languages like Perl and PHP can be run through an interpreter, which converts the high-level code into a low-level language on-the-fly. Updated May 12, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/high-level_language Copy
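To illustrate the gap between high-level source code and the lower-level instructions it is translated into, the short Python sketch below uses the standard dis module to show the bytecode generated for a one-line function. (Python compiles to bytecode for its virtual machine rather than directly to machine code, so this is an analogy rather than the compilation process described above; the function itself is made up for the example.)

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)  # prints the lower-level bytecode instructions the interpreter actually executes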

Hit

Hit A hit is a metric used in website analytics. While "hits" and "visits" are sometimes used interchangeably, they are two different things. A visit is recorded when a user visits a webpage. A hit is recorded for each resource that is downloaded from a web server. Therefore, it is common for hits to outnumber visits, often by a ratio of more than 10 to 1. Most webpages reference multiple files, such as images, CSS documents, and JavaScript files. Each local file referenced in the HTML counts as a hit, along with the webpage itself. Therefore, if a page includes eight images, references two CSS documents, and links to three JavaScript files, it would record 14 hits each time a visitor views the page. Because website hits can vary depending on how many resources each page contains, the metric is rarely used to measure website traffic. Instead, most webmasters track other metrics, such as visitors, unique visitors, and page views. NOTE: The term "hits" can also refer to the number of search engine results for a specific query. For example, if you perform a Google search that generates 300,000 results, you could say the query produced 300,000 hits. Updated April 16, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hit Copy
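The arithmetic behind the example above is simple enough to express in a couple of lines of code. A trivial Python sketch, counting one hit for the page itself plus one hit for every referenced resource:

    images, css_files, js_files = 8, 2, 3
    hits_per_view = 1 + images + css_files + js_files  # the page itself plus every referenced file
    print(hits_per_view)  # 14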

HMD

HMD Stands for "Head-Mounted Display." An HMD is a visual device used for virtual reality, augmented reality, or mixed reality. It can be as small as a pair of glasses or as large as a helmet with an integrated display. Virtual Reality HMDs Head-mounted displays designed for virtual reality (a.k.a. VR headsets) are typically larger than their AR/MR counterparts. Most have adjustable straps or a helmet to keep the headset in place. VR headsets are enclosed, meaning the view is completely black when the screen is turned off. When you turn the headset on, it provides an immersive 3D environment that responds to your movements. Examples include the Meta Quest and the PlayStation VR headsets. Meta Quest 2 VR Headset Augmented & Mixed-Reality HMDs Head-mounted displays designed for augmented and mixed reality are see-through headsets that overlay digital images in a real-world environment. For example, an AR headset may allow you to see how different pieces of furniture would look in your living room. In the case of mixed reality, you can interact with virtual objects in a real 3D space. Examples of AR and MR HMDs include Google Glass and Microsoft HoloLens. Google Glass Enterprise 2 Headset NOTE: Most AR/MR HMDs contain all the hardware needed to operate and do not require additional hardware, such as a PC. The Meta Quest 2 VR headset functions without a computer but can also work with a PC to load additional content. The PlayStation VR headset is a PlayStation accessory and requires a PlayStation console. Updated April 13, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hmd Copy

Home Page

Home Page A home page is a webpage that serves as the starting point of a website. It is the default webpage that loads when you visit a web address that only contains a domain name. For example, visiting https://techterms.com will display the Tech Terms home page. The home page is located in the root directory of a website. Most web servers allow the home page to have one of several different filenames. Examples include index.html, index.htm, index.shtml, index.php, default.html, and home.html. The default filename of a website's home page can be customized on both Apache and IIS servers. Since the home page file is loaded automatically from the root directory, the home page URL does not need to include the filename. There is no standard home page layout, but most home pages include a navigation bar that provides links to different sections within the website. Other common elements found on a home page include a search bar, information about the website, and recent news or updates. Some websites include information that changes every day. For example, the Tech Terms home page includes a daily quiz and tech term of the day. Updated August 1, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/home_page Copy

Honeypot

Honeypot A honeypot is a security system designed to detect and counteract unauthorized access or use of a computer system. The name "honeypot" is used in reference to the way the system traps unauthorized users, such as hackers or spammers so they can be identified and prevented from causing further problems. Honeypots are different than typical security solutions because they intentionally lure in hackers or users with malicious intent. For example, a company may purposely create a security hole in their network that hackers could exploit to gain access to a computer system. The system might contain fake data that would be of interest to hackers. By gaining access to the data, the hacker might reveal identifying information, such as an IP address, geographical location, computer platform, and other data. This information can be used to increase security against the hacker and similar users. Another example of a honeypot is an email honeypot designed to counteract spammers. It may be configured as a fake email address that is intentionally added to known spam lists. The email address can be used to track the servers and relays that send spam to the honeypot account. This information can be used to blacklist certain IP addresses and domain names in anti-spam databases. An email honeypot can even be used as a counterattack tool, which forwards spam to the email addresses of identified spammers. While honeypots are an effective way to monitor and protect information systems, they can also be expensive to maintain. Therefore, honeypots are used primarily by large companies and organizations, rather than small businesses. Government and educational institutions may also deploy research honeypots as a means of tracking unauthorized access attempts and improving security solutions. Updated September 17, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/honeypot Copy

Horizontal Market Software

Horizontal Market Software A horizontal market is one that supplies goods to a variety of industries instead of just one. Therefore, horizontal market software is software that can be used by several different types of industries. For example, word processing and spreadsheet programs are horizontal market applications because they can be used by many types of businesses and consumers. A business owner might use word processing software to type memos to his employees, while a student might use the same program to write a paper for a class. Software developers that create programs for horizontal markets typically have a broad audience, but also face high levels of competition. The opposite of horizontal market software is vertical market software, which is only developed for a specific industry. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/horizontalmarketsoftware Copy

Host

Host A host is a computer that is accessible over a network. It can be a client, server, or any other type of computer. Each host has a unique identifier called a hostname that allows other computers to access it. Depending on the network protocol, a computer's hostname may be a domain name, IP address, or simply a unique text string. For example, the hostname of a computer on a local network might be Tech-Terms.local, while an Internet hostname might be techterms.com. A host can access its own data over a network protocol using the hostname "localhost." Host vs Server The terms host and server are often used interchangeably, but they are two different things. All servers are hosts, but not all hosts are servers. To avoid confusion, servers are often defined as a specific type of host, such as a web host or mail host. For instance, a mail host and mail server may refer to the same thing. While a server refers to a specific machine, a host may also refer to an organization that provides a service over the Internet. For example, a web host (or web hosting company) maintains multiple web servers and provides web hosting services for clients. A file host may provide online storage using multiple file servers. In other words, a hosting company hosts multiple servers that serve data to clients. Updated May 6, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/host Copy

Hostname

Hostname A hostname is a label that identifies a hardware device, or host, on a network. Hostnames are used in both local networks (LANs) and wide area networks like the Internet. Local network hostnames don't have a standard format, but they cannot contain spaces or other special characters like commas, periods, and apostrophes. For example, if you name your Mac "Mike's iMac", the hostname will automatically be translated to "Mikes-iMac.local". If you're a Mac user, you can customize your computer's hostname by modifying the computer name in the Sharing System Preference. If you use a Windows PC, you can modify your hostname by changing the computer name within the System Control Panel. Internet hostnames are simply domain names, like techterms.com. They are typically fully qualified domain names, such as www.techterms.com or ftp.techterms.com, since each subdomain is a different hostname. However, if a website URL does not include a domain prefix like "www," then the domain name itself (e.g. "techterms.com") can serve as the hostname. In both local networks and wide area networks like the Internet, hostnames are translated to IP addresses using DNS. Either a local or remote DNS server maps each hostname to a unique IP address, which is how the device is actually identified on a network. Therefore, hostnames are simply used as human-readable labels, which makes it easy to remember names of devices on a network and websites on the Internet. Updated November 12, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hostname Copy
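Hostname-to-IP translation can be observed directly from code. This minimal Python sketch (the output addresses will vary, and the domain is simply used as an example) asks the operating system's resolver to look up a hostname and also reports the local machine's own hostname:

    import socket

    print(socket.gethostname())                   # this computer's own hostname
    print(socket.gethostbyname("techterms.com"))  # the IP address DNS returns for the domain
    print(socket.gethostbyname("localhost"))      # 127.0.0.1, the loopback address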

Hot Swappable

Hot Swappable In electronics terminology, the word "hot" is often used to mean "active" or "powered on." Therefore, a hot swappable device is a peripheral or component that can be removed or added while a computer is running. Replacing a device while a computer is powered on is called "hot swapping." Most early I/O devices were not hot swappable. This is because early ports, such as parallel ports and SCSI ports, did not support connecting or removing devices while the computer was powered on. In fact, removing devices from these older ports while the computer was running could cause damage to them or the computer. Since it was a hassle for users to turn off their computers each time they needed to connect or disconnect a device, newer I/O ports were designed to be hot swappable. Modern ports that support hot swapping include USB, FireWire, and Thunderbolt. Most modern PCs include only hot swappable ports. Therefore, the term is now used most often to describe internal components that can be replaced while the computer is running. A common example is a server hard drive. Since servers need to run constantly to avoid downtime, they typically support hot swappable hard drives. Rack-based servers often provide easy access to one or more hard drives on the front face of the rack unit. This allows server administrators to quickly add more storage or replace old hard drives without powering down the server. NOTE: With rare exceptions, RAM is generally not hot swappable. Updated April 19, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hot_swappable Copy

Hotfix

Hotfix A hotfix is a software update designed to fix a bug or security hole in a program. Unlike typical version updates, hotfixes are urgently developed and released as soon as possible to limit the effects of the software issue. They are often released between incremental version updates. You may receive a hotfix notification by email or as an alert in the program itself. It may be labeled as a "critical update" or "security update." Some applications allow you to update the software by simply clicking Update in the program. Other applications may require you to download the hotfix package and run the update as an executable file. It is typically advisable to run a hotfix update as soon as possible to avoid problems with the software. However, any time you receive a hotfix notification, make sure it is legitimate and from the developer of the software before agreeing to install it. In most cases, you can check the software company's website to view the update history and release notes for the program. NOTE: Since hotfixes are designed to fix a specific issue, the size of a hotfix is typically small. For this reason, a hotfix may also be called a patch, since it "patches" the bug or security hole. Updated December 30, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hotfix Copy

Hover

Hover When you roll the cursor over a link on a Web page, it is often referred to as "hovering" over the link. This is somewhat like when your boss hovers over you at work, but not nearly as uncomfortable. In most cases, the cursor will change from a pointer to a small hand when it is hovering over a link. Web developers can also use cascading style sheets (CSS) to modify the color and style of a link when a user hovers over it. For example, the link may become underlined or change color while the cursor is hovering over it. The term hovering implies your computer screen is a three-dimensional space. In this conception, your cursor moves around on a layer above the text and images. When you click the mouse button while the cursor is hovering over a link, it presses down on the link to activate it. Hovering can also be used in a more general sense, such as moving the cursor over icons, windows, or other objects on the screen. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hover Copy

HSB

HSB Stands for "Hue, Saturation, Brightness." HSB is a color model that defines a color using three numeric values. It specifies a point on the color wheel and modifies the saturation and brightness to define the final color. Many graphic design programs allow users to switch between RGB and HSB color pickers based on their needs. HSB is also known as HSV, or "Hue, Saturation, Value." The HSB color model uses three attributes to specify a color: Hue represents the angular position on a color wheel as a number between 0º and 359º. Red is at the top of the wheel at 0º, followed by yellow at 60º, green at 120º, cyan at 180º, blue at 240º, and magenta at 300º. Saturation represents how much of that hue is present in the final color as a percentage from 0 to 100%. A saturation of 100% results in the most intense color. A saturation of 0% will result in white, black, or a shade of grey, depending on the Brightness attribute. Brightness (or Value) represents the brightness of the color as a percentage from 0 to 100%. A brightness of 0% is black, while a brightness of 100% produces a pure color (or white, if the Saturation is 0%). The HSB color model can produce the same range of colors as the RGB model, just representing them with different formatting. For example, a light green color using the RGB values R:141 G:195 B:126 (or the hexadecimal value of #8DC37E) is the same as one using the HSB values of a hue of 107º, a saturation of 35%, and a brightness of 76%. A digital image still stores that color using the RGB values. Graphic designers often use HSB color sliders because they are more intuitive when making slight color adjustments. For example, to adjust the saturation of a color, a designer can switch to an HSB color picker and adjust the saturation slider once instead of adjusting three separate RGB sliders. Most graphic design and image editing programs show both RGB and HSB values together in their color pickers, allowing a designer to make adjustments using whichever color model works best for the task. NOTE: The HSB color model is similar to the HSL model, which uses the same calculation for hue but different calculations for saturation and lightness. Updated December 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hsb Copy
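Conversions between HSB and RGB can be performed with Python's standard colorsys module, which calls the model HSV. The short sketch below reproduces the light-green example above; small rounding differences of a point or so are expected because the published values are rounded:

    import colorsys

    # Hue 107 degrees, saturation 35%, brightness 76%, normalized to the 0..1 range colorsys expects
    r, g, b = colorsys.hsv_to_rgb(107 / 360, 0.35, 0.76)
    print(round(r * 255), round(g * 255), round(b * 255))  # approximately 141 194 126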

HSF

HSF Stands for "Heat Sink and Fan." Nearly all computers have heat sinks, which help keep the CPU cool and prevent it from overheating. But sometimes the heat sink itself can become too hot. This can happen if the CPU is running at full capacity for an extended period of time or if the air surrounding the computer is simply too hot. Therefore, a fan is often used in combination with the heat sink to keep both the CPU and heat sink at an acceptable temperature. This combination is creatively called a "heat sink and fan," or HSF. The fan moves cool air across the heat sink, pushing hot air away from the computer. Each CPU has a thermometer built in that keeps track of the processor's temperature. If the temperature gets too high, the fan or fans near the CPU may speed up to help cool the processor and heat sink. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hsf Copy

HSL

HSL Stands for "Hue, Saturation, Lightness." HSL is a color model that defines colors as sets of three attributes, similar to the RGB color model. It begins by specifying a point on the color wheel, then adjusts saturation and lightness to create the final color. Graphic and web designers often use it since it provides an easier way to make small color adjustments without having to use a color picker. Instead of representing a color as a mix of other colors, like in the RGB color model, HSL uses the three attributes of hue, saturation, and lightness to define a color: Hue represents the angular position on the color wheel as a number between 0 and 359º. Pure red is at the top of the wheel, represented by an angular value of 0º; cyan is at the opposite end at 180º. Saturation represents the intensity of the color as a percentage between 0 and 100%. A saturation of 0% results in grey. Lightness represents the lightness of the color as a percentage between 0 and 100%. A lightness of 0% is black, a lightness of 50% is the color itself, and a lightness of 100% is white. The HSL color model can produce the same range of colors as RGB, only with different formatting. For example, a light cyan color with the RGB values R:166, G:242, B:242 (or a hexadecimal value #A6F2F2) is the same as one with a hue of 180º, a saturation of 75%, and a lightness of 80%. Digital image files still save color information as RGB values, even when an image editor uses HSL. HSL is considered a more intuitive method for defining colors as a set of numbers. Web designers often specify colors in CSS using HSL instead of RGB to simplify slight adjustments. For example, if a web designer has specified a color using the CSS code hsl(180, 75%, 80%), they can make it darker by reducing the lightness value (the third number). If they were to attempt the same adjustment using RGB values, they would need to change all three values by different amounts. NOTE: The HSL color model is similar to the HSB model. While HSB uses the same calculation for hue, it uses brightness instead of lightness — where a value of 100 represents full color instead of white — which also affects the calculation for saturation. Updated December 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hsl Copy
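Python's standard colorsys module can convert between HSL and RGB as well; note that it names the model HLS and takes its arguments in hue, lightness, saturation order. A small sketch reproducing the light cyan example above:

    import colorsys

    # Hue 180 degrees, lightness 80%, saturation 75% (colorsys argument order is h, l, s)
    r, g, b = colorsys.hls_to_rgb(180 / 360, 0.80, 0.75)
    print(round(r * 255), round(g * 255), round(b * 255))  # 166 242 242, i.e. #A6F2F2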

HTML

HTML Stands for "Hypertext Markup Language." HTML is the language used to create webpages. "Hypertext" refers to the hyperlinks that an HTML page may contain. "Markup language" refers to the way tags are used to define the page layout and elements within the page. Below is an example of HTML used to define a basic webpage with a title and a single paragraph of text: <!DOCTYPE html> <html> <head> <title>TechTerms.com</title> </head> <body> <p>This is an example of a paragraph in HTML.</p> </body> </html> The first line defines what type of contents the document contains. "<!DOCTYPE html>" means the page is written in HTML5. Properly formatted HTML pages should include <html>, <head>, and <body> tags, which are all included in the example above. The page title, metadata, and links to referenced files are placed between the <head> tags. The actual contents of the page go between the <body> tags. The web has gone through many changes over the past few decades, but HTML has always been the fundamental language used to develop webpages. Interestingly, while websites have become more advanced and interactive, HTML has actually gotten simpler. If you compare the source of an HTML5 page with a similar page written in HTML 4.01 or XHTML 1.0, the HTML5 page would probably contain less code. This is because modern HTML relies on cascading style sheets or JavaScript to format nearly all the elements within a page. NOTE: Many dynamic websites generate webpages on-the-fly, using a server-side scripting language like PHP or ASP. However, even dynamic pages must be formatted using HTML. Therefore, scripting languages often generate the HTML that is sent to your web browser. Updated May 23, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/html Copy

HTML5

HTML5 HTML5 is the fifth major standard of HTML. Development of the standard began in 2007 and HTML5 websites started becoming mainstream in 2010. The final HTML5 standard was officially standardized by W3C on October 28, 2014. The previous HTML standard, HTML 4.01, was standardized in 1999 – fifteen years before the HTML5 standard was published. However, in the decade preceding HTML5, most websites were written in XHTML, a stricter version of HTML published in 2000. HTML5 was designed to supersede both HTML 4 and XHTML by providing web developers with a simpler standard that includes several new features for the modern web. The following list includes the new elements, or tags, introduced in HTML5 that are used to define the structure of a document: <header> defines the webpage header; <footer> defines the page footer; <nav> defines the navigation bar; <main> defines the main content of a webpage; <article> defines an article within a page; <section> defines a section of a document or article; <aside> defines content outside a page's primary content. These tags simplify the webpage's source as well as the corresponding CSS styles. For example, in order to define a navigation element in XHTML, you would typically write <div class="nav"> in the page's HTML and define a class called ".nav" in CSS. In HTML5, you can simply insert the <nav> tag in the HTML and style the element itself using CSS. HTML5 includes several other new tags as well. Examples include <canvas> and <svg> for graphics and <audio> and <video> for multimedia elements. These tags provide new capabilities for web developers, though it is important to note that HTML5 still relies heavily on CSS and JavaScript for page styling, animations, and user interaction. Therefore, most interactive HTML5 websites are built using a combination of HTML5, CSS3, and JavaScript or jQuery. NOTE: Previous HTML standards included a space between the "HTML" and the number (i.e., HTML 1.0, HTML 4.01). HTML 5.0 does away with the space and is officially written as HTML5. Updated June 11, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/html5 Copy

HTTP

HTTP Stands for "Hypertext Transfer Protocol." HTTP is the protocol used to transfer data over the web. It is part of the Internet protocol suite and defines commands and services used for transmitting webpage data. HTTP uses a client-server model. A client, for example, may be a home computer, laptop, or mobile device. The HTTP server is typically a web host running web server software, such as Apache or IIS. When you access a website, your browser sends a request to the corresponding web server, which responds with an HTTP status code. If the URL is valid and the connection is granted, the server will send your browser the webpage and related files. Some common HTTP status codes include: 200 - successful request (the webpage exists) 301 - moved permanently (often forwarded to a new URL) 401 - unauthorized request (authorization required) 403 - forbidden (access is not allowed to the page or directory) 500 - internal server error (often caused by an incorrect server configuration) HTTP also defines commands such as GET and POST, which are used to handle form submissions on websites. The CONNECT command is used to facilitate a secure connection that is encrypted using SSL. Encrypted HTTP connections take place over HTTPS, an extension of HTTP designed for secure data transmissions. NOTE: URLs that begin with "http://" are accessed over the standard hypertext transfer protocol and use port 80 by default. URLs that start with "https://" are accessed over a secure HTTPS connection and often use port 443. Updated May 28, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/http Copy
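The request and response exchange described above can be observed with a few lines of Python's standard http.client module. This is a minimal sketch, using techterms.com only as an example host; the status code you see depends on how the server is configured (a plain HTTP request may return a 301 redirect to the HTTPS version of the site):

    import http.client

    conn = http.client.HTTPConnection("techterms.com")
    conn.request("GET", "/")                 # send an HTTP GET request for the home page
    response = conn.getresponse()
    print(response.status, response.reason)  # e.g., 200 OK, or 301 Moved Permanently
    conn.close()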

HTTPS

HTTPS Stands for "Hypertext Transfer Protocol Secure." HTTPS is a secure version of the HTTP protocol that encrypts web traffic. It adds encryption and authentication using TLS (formerly SSL) to protect data exchanged between a web server and the client's web browser. While it was initially only used by sites to protect login credentials and financial information from eavesdropping, it is now used by most web servers to encrypt all traffic and verify that every page reaches the browser without being modified or corrupted during transit. All HTTPS web traffic uses the TLS protocol for encryption, including every HTML page, cookie, image, and JavaScript file. To use HTTPS, the web server first needs a digital certificate that establishes the identity and authenticity of the site's owner. The web server and the client's web browser conduct a brief handshake where the browser verifies the server's certificate, and they work together to generate a temporary session encryption key. This key encrypts and decrypts all traffic between the server and the client, protecting it from any eavesdropping or man-in-the-middle attacks. A padlock icon in a browser's address bar indicates that the current webpage is using HTTPS You can identify whether a site uses HTTPS by looking at your browser's address bar. Addresses that start with https:// are secure. If your browser hides the beginning of URLs, it will instead show a padlock icon before the rest of the address. You can click this icon to view the site's security certificate and verify the person or business responsible for it. You should always check that a website is secure before entering login credentials or your credit card information by looking for the padlock; in fact, most browsers now actively label sites that don't use HTTPS as "Not Secure" to indicate that traffic is not encrypted. Updated July 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/https Copy
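The certificate a server presents during the TLS handshake can also be fetched and inspected programmatically. A minimal Python sketch using the standard ssl module (the hostname is just an example, and the output is the certificate in PEM text form):

    import ssl

    # Retrieve the certificate the server presents on port 443, the default HTTPS port
    pem_cert = ssl.get_server_certificate(("techterms.com", 443))
    print(pem_cert[:100])  # print the beginning of the PEM-encoded certificate text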

Hub

Hub There are two primary types of hubs in the computing world: 1) network hubs and 2) USB hubs. 1. Network hub A network hub is a device that allows multiple computers to communicate with each other over a network. It has several Ethernet ports that are used to connect two or more network devices together. Each computer or device connected to the hub can communicate with any other device connected to one of the hub's Ethernet ports. Hubs are similar to switches, but are not as "smart." While switches send incoming data to a specific port, hubs broadcast all incoming data to all active ports. For example, if five devices are connected to an 8-port hub, all data received by the hub is relayed to the five active ports. While this ensures the data gets to the right port, it also leads to inefficient use of the network bandwidth. For this reason, switches are much more commonly used than hubs. 2. USB hub A USB hub is a device that allows multiple peripherals to connect through a single USB port. It is designed to increase the number of USB devices you can connect to a computer. For example, if your computer has two USB ports, but you want to connect five USB devices, you can connect a 4-port USB hub to one of the ports. The hub will create four ports out of one, giving you five total ports. The USB interface allows you to daisy chain USB hubs together and connect up to 127 devices to a single computer. Some USB hubs include a power supply, while others do not. If you're connecting basic devices like a mouse, keyboard, and USB flash drive, an unpowered or "passive" USB hub should work fine. However, some peripherals, like external hard drives and backlit keyboards, require additional electrical power. In order for these types of devices to function through a USB hub, you may need to use a powered or "active" hub that provides 5 volts of power to connected devices. Updated June 27, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hub Copy

HWB

HWB Stands for "Hue, Whiteness, Blackness." HWB is a color model that defines colors using sets of three attributes. It allows graphic and web designers to create a color by specifying a point on the color wheel, and then adding white or black in varying amounts to adjust the color's brightness and saturation. Like the HSB and HSL models, it provides a more intuitive way to adjust colors than modifying individual RGB values. The HWB model uses three attributes to define each color: Hue represents the angular position of the base color on the color wheel, using a number between 0° and 359°. Pure red is at the top of the color wheel at 0°, followed by yellow at 60°, then green, cyan, blue, and magenta before it circles back to red. Whiteness represents how much white is mixed into the pure hue from the color wheel, as a percentage from 0 to 100%. A whiteness of 0% results in a fully saturated color, while a whiteness of 100% fully desaturates the color and appears white. Blackness represents how much black is mixed into the pure hue, also as a percentage. Like whiteness, a blackness of 0% shows a fully saturated color, while a blackness of 100% appears black. Mixing the three HWB attributes works much like mixing paints together. You start with a pure base color from any point on the color wheel, then adjust its brightness by mixing in white or black. Adding both white and black together desaturates the color and brings it closer to grey. If the combined amount of white and black exceeds 100%, the values are normalized to add up to 100% and produce a shade of grey using the same ratio of white and black. For example, HWB colors with matching white and black values of 50%, 75%, or 100% all normalize to the same middle shade of grey. Web designers can specify colors using HWB within CSS. They can make simple adjustments to HWB numbers to change the base color or brighten, darken, or desaturate colors on their websites. For example, if a web designer starts with a color using the CSS code hwb(180, 25%, 0%), they can make it lighter by increasing the W attribute, darken it by increasing the B attribute, or desaturate it by increasing both. HWB was first added as a color method to the CSS specification in 2014, but most major web browsers did not fully support it until 2022. NOTE: The possible range of colors using HWB is the same as with RGB, only with a different set of attributes. Digital images still include color information saved using RGB values, and a color value can be easily converted to RGB (or hexadecimal) through software. Updated November 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hwb Copy
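Because HWB is defined as a pure hue mixed with amounts of white and black, converting it to RGB is straightforward: start from the fully saturated hue, then blend in the white and black amounts, normalizing them if they sum to more than 100%. The Python sketch below follows the description above rather than any official reference implementation, and the helper function name is made up for the example:

    import colorsys

    def hwb_to_rgb(hue_deg, whiteness, blackness):
        """Convert HWB (hue in degrees, whiteness/blackness as 0..1 fractions) to 0..255 RGB."""
        if whiteness + blackness > 1:                    # normalize so W + B never exceeds 100%
            scale = whiteness + blackness
            whiteness, blackness = whiteness / scale, blackness / scale
        pure = colorsys.hsv_to_rgb(hue_deg / 360, 1, 1)  # the fully saturated base hue
        return tuple(round((c * (1 - whiteness - blackness) + whiteness) * 255) for c in pure)

    print(hwb_to_rgb(180, 0.25, 0.0))  # hwb(180, 25%, 0%): a light cyan
    print(hwb_to_rgb(0, 0.5, 0.5))     # 50% white + 50% black: middle grey (128, 128, 128)
    print(hwb_to_rgb(0, 1.0, 1.0))     # normalized back to the same middle grey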

Hyper-Threading

Hyper-Threading Hyper-threading is a processor feature that allows a single processor core to execute multiple process threads simultaneously. A CPU that supports hyper-threading splits each of its physical processor cores into two virtual cores, which increases the efficiency of a computer by letting it run more demanding applications. Hyper-threading was developed by Intel and used in many of their processors; AMD uses a similar technology in their processors, called Simultaneous Multithreading. Each processor core in a CPU includes a set of execution units that perform a separate computational function, like multiplication or addition. Without hyper-threading, each one can only run a single process thread at a time, and depending on which calculations are required, many of those execution units will sit idle. Hyper-threading allows two separate threads to run together by pairing two that need different computational tasks. Running threads in parallel this way is not as fast as using two physical cores, but is still more efficient than running one thread at a time. While hyper-threading can help a computer run more efficiently, it requires the support of the computer's operating system and other software. First, the operating system assigns each physical processor core two virtual core addresses. It then assigns threads in a way that splits calculations between the virtual cores and maximizes efficiency. For a program to benefit from hyper-threading, it must be written in a way that splits its tasks into separate threads that can run concurrently. If a program only has one process thread at a time, it cannot send a second thread to the other virtual core. However, even if a computer is running a single-threaded application, it can run process threads from other applications at the same time, which helps a computer perform faster when multitasking. NOTE: Hyper-threading is more technically complex and requires more power than an equivalent non-hyper-threaded processor. Intel generally reserves hyper-threading for its higher-end processors. Updated July 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hyperthreading Copy
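One visible effect of hyper-threading is that the operating system reports more logical processors than the CPU has physical cores. A small Python sketch illustrating this; note that the physical-core check relies on the third-party psutil package, which is an assumption for this example rather than something from the definition above:

    import os
    import psutil  # third-party package: pip install psutil

    logical = os.cpu_count()                    # logical processors (virtual cores)
    physical = psutil.cpu_count(logical=False)  # physical cores
    print(logical, physical)                    # e.g., 8 and 4 on a hyper-threaded quad-core CPU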

Hyperautomation

Hyperautomation Hyperautomation combines artificial intelligence (AI), machine learning (ML), and robotic process automation to automate routine business tasks. It allows computer systems to perform simple, repetitive tasks that would otherwise take up a lot of time, freeing up human employees to give them more time for higher-value work. Conventional automation tools perform relatively simple tasks, strictly following the rules and structures programmed for them. If any of these automated tasks need to change, someone has to program those changes into the system. Hyperautomation software tools combine these simple automation tasks with artificial intelligence and machine learning, helping systems spot trends and adjust their processes without human intervention. For example, ordinary automation could send marketing emails on a schedule to a subset of a customer list based on their past purchases. Adding AI and ML to that automation would allow the system to learn, identifying new purchasing trends and adapting the list of recipients and the products featured without further human intervention. Hyperautomation requires careful planning. Businesses looking to use hyperautomation need to make sure that the new automation will be useful, that they'll fit into established workflows, and that the tools used work together smoothly. By identifying tasks well suited to automation, businesses and workers can save time and focus on more important work. Updated January 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hyperautomation Copy

Hyperlink

Hyperlink A hyperlink is a bit of text, an image, or a button in a hypertext document that you can click. A hyperlink may include a link to another document or to another part of the current page. Hyperlinks are found on virtually every webpage to help navigate readers to other pages and websites. Other types of files may also include hyperlinks. PDF files and Word documents often include hyperlinks to other parts of the document or an outside website. Presentation files may use hyperlinks to help a presenter jump to a specific slide. eBooks include hyperlinks for readers to quickly jump to the start of a chapter from the table of contents, view an endnote by tapping its reference, or open another book by tapping a link. Clicking a hyperlink to open another webpage The default formatting for a hyperlink on a webpage or in a document is blue, underlined text. However, web designers often customize the appearance of links to match the overall theme of a webpage or document, so you cannot always expect a hyperlink to keep its default look. Hyperlinks that use images instead of text do not receive any special formatting. In HTML, a hyperlink uses the <a> tag. The <a> tag includes the URL of the link's destination, as well as other optional attributes — whether the link is a file for download, how to include referrer information, or if the link should open in the same window or a new one. The <a> tag surrounds the anchor text, which is the text that actually appears on the page and can be clicked. For example, here is the HTML code for a hyperlink to this website: <a href="https://techterms.com">TechTerms</a> Updated May 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hyperlink Copy

Hypermedia

Hypermedia Most Web navigation is done by clicking text-based links that open new pages in a Web browser. These links, which are often blue and underlined, are referred to as hypertext, since they allow the user to jump from page to page. Hypermedia is an extension of hypertext that allows images, movies, and Flash animations to be linked to other content. The most common type of hypermedia is an image link. Photos or graphics on the Web are often linked to other pages. For example, clicking a small "thumbnail" image may open a larger version of the picture in a new window. Clicking a promotional graphic may direct you to an advertiser's website. Flash animations and videos can also be turned into hyperlinks by embedding one or more links that appear during playback. You can tell if an image or video is a hyperlink by moving the cursor over it. If the cursor changes into a small hand, that means the image or video is linked to another page. Clicking the text, image, or video will open up a new location in your Web browser. Therefore, you should only click a hypertext or hypermedia link when you are ready to leave the current page. If you want to open the link in a new window, you can usually right click the link and select "Open Link in New Window." Updated January 19, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hypermedia Copy

Hypertext

Hypertext Hypertext is a kind of specially-formatted text that provides a link to other content. Hypertext allows system designers to organize information in a branching structure instead of a linear one. Clicking a hypertext link (called a hyperlink) will send the user to another part of the current document or a separate document entirely. Hypertext links are a fundamental building block of the Internet, allowing users to navigate in a web browser from page to page and from site to site. The use of hypertext predates the Internet, dating back to the 1960s. It allows documents in a system to cross-reference each other and provide quick access to information. Dictionary and encyclopedia software use hypertext to link from one article or definition to another. Apple's HyperCard software used hypertext (as well as hyperlinks added to images) to allow users to create and navigate a flat-file database, which people used to create everything from simple databases to presentations to interactive choose-your-own-adventure games. Hypertext served as the basis for the original information-sharing system that became the World Wide Web, and its usefulness helped spur the growth of the web on the Internet. Hypertext is used on virtually every single web page on every single website. Hypertext links are also still used in many types of documents, helping users navigate long text documents and spreadsheets. NOTE: Similarly-formatted images and videos that provide links to other content are commonly called "hypermedia." Updated December 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/hypertext Copy

Hypervisor

Hypervisor A hypervisor is a software program that manages one or more virtual machines (VMs). It is used to create, start, stop, and reset VMs. The hypervisor allows each VM or "guest" to access the physical hardware, such as the CPU, RAM, and storage. It can also limit how many system resources each VM can use so that multiple VMs can run simultaneously on a single system. There are two primary types of hypervisors: native and hosted. Type-1: Native A native or "bare metal" hypervisor runs directly on the hardware. It sits between the hardware and one or more guest operating systems. A native hypervisor loads even before the OS and interacts directly with the kernel. This provides the highest possible performance since there is no primary operating system competing for the computer's resources. However, it also means the computer may only be used to run virtual machines since the hypervisor is always running. Examples of Type-1 hypervisors include VMware ESXi, Microsoft Hyper-V, and Xen. Type-2: Hosted A hosted hypervisor is installed on a host computer, which already has an operating system installed. It runs as an application like other software on the computer. Most hosted hypervisors can manage and run multiple VMs at one time. The advantage of a hosted hypervisor is that it can be opened or quit as needed, freeing up resources for the host computer. However, since it runs on top of an operating system, it may not offer the same performance as a native hypervisor. Examples of Type-2 hypervisors include VMware Workstation, Oracle VirtualBox, and Parallels Desktop for Mac. Generally, hosted hypervisors are more common for personal and small business use, while native hypervisors are used for enterprise applications and cloud computing. Updated September 29, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/hypervisor Copy

I/O

I/O Stands for "Input / Output." I/O refers to a computer system receiving input and returning output, and is one of the fundamental aspects of computing. When you type a letter on a keyboard or click a mouse button, you provide input; when the computer displays something on its monitor, that's output. The term can refer to both I/O devices and I/O operations. I/O devices include the peripherals that allow you to interact with a computer, as well as those that enable two computer systems to exchange data. Input devices allow you to give the computer instructions or input data — for example, a keyboard, mouse, microphone, or touchpad. Output devices show data to you, like monitors, speakers, and printers. Some devices handle both input and output, either to the user (like a multifunction printer/scanner or gaming controller with haptic feedback) or to other computer systems (like a network interface or storage device). The ports on a computer that you plug devices into are known as I/O ports. USB ports, display interfaces like HDMI and DisplayPort, audio ports, and Ethernet ports are the most common ones found on modern computers. A computer's CPU manages its I/O operations. Data travels to the CPU from a device controller over a data bus. The CPU sends the data it receives along the right bus to its destination. That destination may be a user-facing output device like the computer's monitor, its memory or storage devices, or a network interface to another system. Updated February 23, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/io Copy

I/O Address

I/O Address An I/O address is a numeric identifier that a computer's CPU uses to communicate with a peripheral. Every I/O port on a computer, like a USB port, Ethernet jack, and HDMI port, has a unique address that differentiates it from every other port. Separate I/O addresses help the CPU know which peripheral is which and interact with them directly. When a program running on a computer's CPU needs to communicate directly with a peripheral, it sends commands and data to that peripheral's I/O address. The peripheral can respond by sending information back to the computer's CPU at that same address. For example, when you print a document, the CPU forwards the document and print settings to the printer's I/O address, where the printer can then process that request and print the document. [Image: The Windows Device Manager displaying a computer's I/O addresses] Early computers often mapped I/O ports using fixed addresses based on the type of I/O port — for example, the standard address for a computer's first serial port was always 0x03F8. These addresses were separate from system memory and required unique CPU instructions to send data to and from I/O devices. Modern computers now use a more efficient method called memory-mapped I/O that assigns addresses to I/O ports from the same address space as the main system memory. Sharing that address space allows any CPU instruction that accesses system memory to also access I/O devices. Since the operating system now manages I/O addresses, applications can use system APIs to interact with peripherals instead of sending data to specific, pre-defined addresses — drastically increasing reliability and reducing the frequency of conflicts. Updated July 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/io_address Copy

IaaS

IaaS Stands for "Infrastructure as a Service." Infrastructure as a Service is a type of cloud computing service that provides servers, data storage, virtual machines, and network connections as an on-demand service. Customers do not need to run their own data center or computer hardware; the IaaS provider handles all of the hardware maintenance and network connectivity, allowing the customer to focus entirely on the software they want to run. IaaS is similar to other "as a Service" cloud computing products that offer managed server hosting services. IaaS provides only the managed hardware and network connection, leaving all software decisions and responsibilities to the customer. Other cloud products, like Platform as a Service (PaaS) and Software as a Service (SaaS), manage both the hardware infrastructure and the software platforms running on the hardware. IaaS customers handle their software installation, configuration, and updates. Allowing that much flexibility for software makes these services popular for software testing and development, web app and website servers, and high-powered computing and data analysis. Like other cloud services, IaaS offers more flexibility than a company running its own data center. While a large enterprise can afford to build a data center, IaaS products are significantly cheaper for small and medium businesses to start using. Cloud services are scalable, allowing the customer to add computing or storage capacity when needed and then scale back later. Cloud data centers can also be placed around the world, offering redundancy and traffic balancing for the client. Updated November 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/iaas Copy

IANA

IANA Stands for "Internet Assigned Numbers Authority." The IANA is a standards authority that operates in affiliation with ICANN. It is responsible for coordinating several globally-unique numbering systems that keep the Internet going. Its primary roles are allocating blocks of IP addresses, managing the root zone DNS entries, and maintaining a registry of standard Internet protocols. Without a central authority charged with distributing IP addresses, computer networks wouldn't know how to route Internet traffic. However, with billions of possible IP addresses available, it's not feasible for a single entity to distribute them all individually. Instead, the IANA allocates blocks of IP addresses to Regional Internet Registries (RIRs). The RIRs then distribute addresses to National Internet Registries (NIRs) and ISPs; they, in turn, assign them to individual customers. In addition to assigning blocks of IP addresses, the IANA is responsible for managing the upper-most parts of the DNS hierarchy, known as the root zone. It maintains a central registry of top-level domains (like ".com," ".biz," or ".uk") and delegates the administration of each one to a registry operator. Domain registrars can then sell and assign domain names using those TLDs. For example, the ".com" TLD is delegated to Verisign for them to manage, while ".org" is delegated to the non-profit Public Interest Registry. The IANA's third major responsibility is maintaining a registry of standard Internet protocols. HTTP status codes, port numbers, language abbreviations, and MIME types are all standards that software, operating systems, and web servers need to agree on for the Internet to work. For example, the SSH protocol's use of port 22 is part of the Service Name and Transport Protocol Port Number Registry maintained by the IANA. By referencing a central database of standard protocols, software and websites can use standard protocols to work together seamlessly. Updated January 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/iana Copy
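
As an illustration, most operating systems ship with a local copy of the IANA port-number assignments (for example, the /etc/services file on Unix-like systems), which can be queried from Python's standard library. This is only a sketch; the results depend on the local services database.

    import socket

    # Look up the service registered for TCP port 22 in the local services database,
    # which mirrors IANA's Service Name and Transport Protocol Port Number Registry
    print(socket.getservbyport(22, "tcp"))       # expected output: ssh

    # Reverse lookup: find the port assigned to a well-known service name
    print(socket.getservbyname("https", "tcp"))  # expected output: 443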

IBM Compatible

IBM Compatible The personal computer market in the early 1980's consisted primarily of Apple and IBM computers. Apple's systems ran a proprietary operating system developed by Apple, while IBM machines primarily ran PC-DOS. As the demand for personal computers began to grow, IBM decided to license the DOS operating system to other manufacturers. These companies began producing personal computers that were called PC clones or IBM compatibles. As several other manufacturers began producing PCs, supplies grew and costs began to drop. This enabled more people to afford PCs and sales of IBM compatibles began to dominate the personal computer market. It wasn't long until the new manufacturers' PC sales surpassed the number of computers sold directly by IBM. The Apple Macintosh also gained substantial market share when it was introduced in 1984, but the low cost and wide availability of IBM compatibles kept their sales strong. Sales of IBM compatibles surged again in 1995, when Microsoft introduced the Windows 95 operating system. However, by that time, the term "IBM compatible" had become almost irrelevant, since most PCs used Microsoft Windows as the primary operating system. Also, PC manufacturers had been building their own computers for many years, and there were few similarities between IBM's own PCs and IBM compatibles. In 2005, IBM stopped manufacturing personal computers. The company that started the PC revolution is no longer in the market. Therefore, the term "IBM compatible" is a bit outdated, though it can still be used to describe Windows-based computers. The term "PC" is more appropriate, albeit a bit ambiguous, since Macs are technically PCs too. Therefore, the term "Windows computer" seems to be the best way to describe a modern day IBM compatible. Updated May 3, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ibmcompatible Copy

ICANN

ICANN Stands for "Internet Corporation for Assigned Names and Numbers." ICANN is a non-profit organization that coordinates and manages the Internet's domain name system (DNS) and the allocation of IP addresses. It was formed in 1998 and is registered and headquartered in California, operating as a partnership with representatives around the world. By ensuring that domain names and IP addresses are unique identifiers without duplication, ICANN helps keep the Internet stable and globally accessible. ICANN performs several important functions that are required to keep the Internet running: Allocate IP Addresses: ICANN is the top authority for the distribution of IP addresses. Each IP address on the Internet must be unique — if two separate web servers had the same address, other computers would not know which server they were communicating with. Since there are more than 4 billion unique IPv4 addresses, ICANN cannot allocate each one individually; instead, they distribute blocks of addresses to regional Internet registries (RIRs), who then allocate smaller blocks to governments, organizations, and Internet service providers (ISPs). Manage the Domain Name System: ICANN delegates the management of each top-level domain (like .com and .biz) to a separate registry company. Each registry maintains a list of each domain within its TLD with its associated IP address, which other computers can use to look up a specific website's web server. Coordinate Root DNS Servers: There are a total of 13 root DNS servers, each run by a separate operator. ICANN works with all of them to coordinate their operations so that DNS information is always current. Develop Policy: ICANN works with other Internet technology stakeholders (including governments, businesses, and technical experts) to develop clear and fair policies for managing IP addresses, domain names, and other name-and-number resources. It is important to note that ICANN has no control over content on the Internet or any particular website. Instead, their focus is solely on the management of Internet resources and the coordination of critical functions necessary to keep the Internet accessible. NOTE: More information about ICANN's mission can be found at icann.org. Updated December 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/icann Copy

ICCID

ICCID Stands for "Integrated Circuit Card Identifier." An ICCID is a unique number assigned to a SIM card used in a mobile phone or another cellular device. It provides a standard way to identify each mobile device connected to a cellular network. ICCID Format An ICCID is 19 or 20 digits long and contains individual sections of numbers, including a header, ID number, and check digit, calculated using a checksum. The header is 6 or 7 digits long and includes the following information: Major industry identifier (MII) - the first two digits — always 89 for SIM cards Country code (CC) - one to three digits — represents USA, England, Japan, etc. Issuer Identifier Number (IIN) - two to three digits — represents Verizon, AT&T, T-Mobile, etc. ICCID vs IMEI Both the ICCID and IMEI (International Mobile Equipment Identity) are unique IDs for mobile devices. However, the IMEI is attached to the hardware while the ICCID is assigned to a removable SIM card. Therefore a smartphone will always have the same IMEI, but the ICCID will change if the SIM card is replaced. Cellular providers activate devices based on their ICCID rather than their IMEI. Since each mobile account is linked to an ICCID, it is possible to use the same device on multiple networks by merely swapping out the SIM card. Updated September 24, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iccid Copy
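
The check digit mentioned above is typically computed with the Luhn algorithm. The following Python sketch (an illustration, not an official validator) checks whether a numeric string such as an ICCID passes the Luhn checksum; the sample value is only an example.

    def luhn_valid(number: str) -> bool:
        total = 0
        # Walk the digits from right to left; double every second digit,
        # subtracting 9 whenever the doubled value exceeds 9
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        # The number is valid when the sum is a multiple of 10
        return total % 10 == 0

    print(luhn_valid("89014103211118510720"))  # sample ICCID-style value; prints True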

ICF

ICF Stands for "Internet Connection Firewall." ICF is a Windows XP feature that protects computers connected to the Internet from unauthorized access. When ICF is enabled, Windows keeps a log of incoming requests from other systems on the Internet. If the request is something the user has requested, like a Web page, the transmission will not be affected. However, if the request is unsolicited and is not recognized by the system, the transmission will be dropped. This helps prevent intrusion by hackers or malicious software such as spyware. While ICF limits incoming traffic from the Internet, it does not affect outgoing traffic. This means data sent from your computer is still vulnerable to viruses or other disruptions even when ICF is enabled. If you have multiple computers sharing the same Internet connection via ICS, a single firewall can protect all of them. However, you should enable ICF only on the router or system connected directly to the Internet, not on each individual system. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/icf Copy

iCloud

iCloud iCloud is an online service provided by Apple. It provides an email account, online storage, and backup services. It also allows you to share data between devices, such as Macs, iPhones, and iPads. Below is a list of features included with iCloud: Mail – an email account that can be automatically configured on Apple devices and accessed via Apple's webmail interface; iCloud email addresses end with mac.com, me.com, or icloud.com. Contacts – your address book is stored in the cloud and can be synced across all your devices. Calendar – you can store one or more calendars online and sync the events across all your devices. Photos – Apple's Photo Stream automatically stores your recently captured photos in iCloud. You can also use iCloud Photo Library to store all your images on iCloud and sync them across your devices. iCloud Drive – an online storage area where you can save and download all types of files. Notes – you can jot down notes on one device and view them on another one. Reminders – when you set a reminder it will show up on all devices connected to your iCloud account. Find My iPhone – a service that allows you to locate all your Apple devices, such as your iPhone, iPad, or MacBook. Several Apple applications support iCloud, meaning you can save a document from one device and open and edit it on another one. For example, you can create a presentation using Apple Keynote on an iMac, save it in iCloud, then open it on your iPad at a later time. The primary apps that support iCloud include Pages (word processing documents), Numbers (spreadsheets), and Keynote (presentations). iCloud includes web-based versions of these applications as well. Apple's iCloud service is free to use and can be accessed using an Apple ID on any supported device. If you need additional online storage, you can purchase extra storage space for a monthly fee. NOTE: iCloud is an evolution of Apple's cloud computing services. Previous versions include iTools (2000), .Mac (2002), and MobileMe (2008). iCloud, which replaced MobileMe, was launched in 2011. Updated September 16, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/icloud Copy

ICMP

ICMP Stands for "Internet Control Message Protocol." ICMP is an error-reporting protocol that helps diagnose problems with a TCP/IP network connection. It is a network-level protocol that assists in the movement of data packets between networks, and it is an important part of the Internet Protocol (IP) suite. Network devices like routers and switches send ICMP error messages when a data packet cannot reach its destination. Transferring data between two computers over the Internet requires a series of separate network connections along the way. Each data packet moves from network to network, forwarded to the next step by each network's router. A series of packets sent between the same two computers may even take different routes and arrive out of order before being reassembled at their destination. If any device cannot successfully forward a packet to its next hop, it sends an ICMP error message back to the packet's source. [Image: The Ping utility uses ICMP messages to check whether a remote server is reachable and how long a round trip takes] The ICMP protocol specifies several error types that help identify a connectivity problem. For example, a Time Exceeded error means that the data packet's TTL expired before it reached its destination, while a Parameter Problem error indicates a problem with the data packet's IP header. Some error types include a subtype code that provides more detail. The Destination Unreachable error, for example, has 16 possible subtypes that each describe a different cause. In most cases, software on the origin computer resolves these errors automatically by sending the data packet again or trying a different route, often without the user noticing. However, diagnostic tools like Ping and Traceroute use ICMP messages to help you diagnose connectivity problems between your computer and a remote server. Updated September 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/icmp Copy
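
For example, the ping utility mentioned above sends ICMP Echo Request messages and reports the round-trip time of each reply. A minimal Python sketch that invokes the system's ping command is shown below; the hostname is a placeholder, and the "-c" count flag applies to macOS and Linux (Windows uses "-n" instead).

    import subprocess

    # Send four ICMP Echo Requests to a host and print the replies and round-trip times
    subprocess.run(["ping", "-c", "4", "example.com"])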

Icon

Icon An icon is a small graphical representation of a file, folder, or application. Icons provide a way to interact with something on your computer by first selecting its icon and then issuing a command — deleting a file, moving one to another folder, or double-clicking one to open it. Icons are a fundamental part of computers with graphical user interfaces (GUIs). The first popular, mass-market computer to use icons to represent files, folders, and applications was the Apple Macintosh in 1984. Older computers relied on a command line interface to issue commands; if you wanted to move a file, you had to type the move command, the file name, and the destination folder. The combination of icons and the mouse allowed users to interact with files directly — to move a file from the desktop into a folder, all you had to do was click the icon representing the file, then drag it onto the icon representing the destination folder. [Image: File and folder icons as they appear in macOS] Icons appear all over the user interface for most operating systems. Applications are represented by their icons on the macOS dock, the Windows taskbar, and the home screens of iOS and Android devices. Smaller icons in the Windows system tray and the macOS menu bar provide quick access to utility applications. Many file icons even show a small preview of the file's contents — some text file icons show a thumbnail of the text, while some image file icons show a small thumbnail of the image itself. Updated February 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/icon Copy

ICQ

ICQ ICQ is a popular online messaging program. It is similar to instant messaging programs like AIM and Yahoo! Messenger, but allows users to enter chat rooms and chat with multiple users at one time. Therefore, ICQ (which is short for "I seek you") is more of a community-oriented chat program than other messaging programs. Since the program was released in 1996, ICQ has gone through several major revisions. It has evolved from a basic messaging program to a communications tool that allows users to interact in many different ways. For example, ICQ now supports voice, video, and VoIP communications and can be used to send text messages to mobile phones via SMS. Programs like ICQ2Go, the ICQ Toolbar, and ICQmail have all been developed as additional tools for ICQ users. ICQ remains community-oriented and offers several types of chat rooms, categorized by age, location, lifestyles, beliefs, and other classifications. Users can join chat rooms using the ICQ software or via the Web interface at ICQ.com. Updated July 1, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/icq Copy

ICS

ICS Stands for "Internet Connection Sharing." ICS is a software service that allows one computer to share a direct Internet connection with other computers over a local network. It allows the host computer to act as a gateway, directing traffic between nearby computers and the Internet. Windows, macOS, and Unix-based operating systems all support some form of ICS. Since the Internet connection only provides a single IP address, a computer hosting a shared Internet connection uses DHCP and NAT to forward data packets between the Internet and the computers on the local network. The host computer must have a direct connection to the Internet via a modem or another gateway device (like an ONT). It must also have another network interface it can use to share that connection. For example, if the host computer has both Ethernet and Wi-Fi adapters, it can connect to a cable or DSL modem via Ethernet and share that connection over Wi-Fi. Once the right connections are in place, you can enable ICS on the adapter with the Internet connection to share it through the other adapter. [Image: A Windows computer sharing its Internet connection] Windows 98 was the first home operating system that supported sharing an Internet connection. It was most often used to share one computer's dial-up Internet connection with other computers in the home. However, it is now more common for a home Internet connection to connect directly to a network's router, which shares it with the rest of the home network. Updated May 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ics Copy

ICT

ICT Stands for "Information and Communication Technologies." ICT refers to technologies that provide access to information through telecommunications. It is similar to Information Technology (IT), but focuses primarily on communication technologies. This includes the Internet, wireless networks, cell phones, and other communication mediums. In the past few decades, information and communication technologies have provided society with a vast array of new communication capabilities. For example, people can communicate in real-time with others in different countries using technologies such as instant messaging, voice over IP (VoIP), and video-conferencing. Social networking websites like Facebook allow users from all over the world to remain in contact and communicate on a regular basis. Modern information and communication technologies have created a "global village," in which people can communicate with others across the world as if they were living next door. For this reason, ICT is often studied in the context of how modern communication technologies affect society. Updated January 4, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ict Copy

IDE

IDE IDE stands for both "Integrated Device Electronics" and "Integrated Development Environment." The first is a hardware term, while the second is related to software programming. 1. Integrated Device Electronics IDE was the most widely-used type of hard drive from the mid 1990s to the late 2000s. The "integrated" aspect of the name describes how the controller is integrated into the drive itself. IDE and ATA are often used synonymously since they both refer to the same type of hard drive. However, ATA describes the interface while IDE describes the actual hard drive. The first IDE standard (ATA-1) was released in 1994 and supported data transfer rates of 8.3 MB/s in DMA mode. Enhanced IDE (ATA-2) was standardized in 1996 and supported data transfer rates up to 16.7 MB/s – twice the rate of the original standard. The next several IDE standards were labeled using ATA versions (up to ATA-7), maxing out at 133 MB/s. The IDE interface was eventually superseded by SATA, an even faster interface. 2. Integrated Development Environment A software IDE is an application that developers use to create computer programs. In this case, "integrated" refers to the way multiple development tools are combined into a single program. For example, a typical IDE includes a source code editor, debugger, and compiler. Most IDEs also provide a project interface that allows programmers to keep track of all files related to a project. Many support version control as well. Some IDEs provide a runtime environment (RTE) for testing software programs. When a program is run within the RTE, the developer can track each event that takes place within the application being tested. This can be useful for finding and fixing bugs and locating the source of memory leaks. Because IDEs provide a centralized user interface for writing code and testing programs, a programmer can make a quick change, recompile the program, and run the program again. Programming is still hard work, but IDE software helps streamline the development process. Updated July 8, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ide Copy

IDS

IDS Stands for "Intrusion Detection System." An IDS monitors network traffic for suspicious activity. It may consist of hardware, software, or a combination of the two. IDSes are similar to firewalls, but are designed to monitor traffic that has entered a network, rather than preventing access to a network entirely. This allows IDSes to detect attacks that originate from within a network. An intrusion detection system can be configured for either a network or a specific device. A network intrusion detection system (NIDS) monitors inbound and outbound traffic, as well as data transfers between systems within a network. NIDSes are often spread out across several different points in a network to make sure there are no loopholes where traffic may go unmonitored. An IDS configured for a single device is called a host intrusion detection system, or HIDS. It monitors a single host for abnormal traffic patterns. For example, it may look for known viruses or malware in both inbound and outbound traffic. Some HIDSes even monitor important system files to make sure they are not modified or deleted. Both network and host IDSes are designed to detect intrusions and sound an alert. This alert may be sent to a network administrator or may be processed by an automated system. A system that automatically handles intrusion alerts is called a reactive IDS or an intrusion prevention system (IPS). Updated May 9, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ids Copy
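
As a greatly simplified illustration of signature-based detection, the Python sketch below scans captured payloads for known suspicious patterns and raises an alert on a match. Real IDSes use far more sophisticated rule sets and anomaly detection; the signatures and payload here are made-up placeholders.

    # Hypothetical byte patterns an administrator considers suspicious
    signatures = [b"/etc/passwd", b"<script>", b"' OR 1=1"]

    def inspect(packet_payload: bytes) -> bool:
        """Return True (and print an alert) if the payload matches a known signature."""
        for sig in signatures:
            if sig in packet_payload:
                print("ALERT: suspicious traffic matched signature", sig)
                return True
        return False

    inspect(b"GET /index.html?q=' OR 1=1 HTTP/1.1")  # triggers an alert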

IEEE

IEEE Stands for the "Institute of Electrical and Electronics Engineers" and is pronounced "I triple E." The IEEE is a professional association that develops, defines, and reviews electronics and computer science standards. Its mission is "to foster technological innovation and excellence for the benefit of humanity." The history of the IEEE dates back to the 1800s when electricity started to influence society. In 1884, the AIEE (American Institute of Electrical Engineers) was formed to support electrical innovation. In 1912, the IRE (Institute of Radio Engineers) was created to develop wireless telegraphy standards. In 1963, the two organizations merged to become a single entity – the IEEE. Since then, the organization has established thousands of standards for consumer electronics, computers, and telecommunications. While the Institute of Electrical and Electronics Engineers is based in the United States, IEEE standards often become international standards. Below are a few examples of technologies standardized by the IEEE. IEEE 1284 (Parallel Port) – an I/O interface used by early desktop PCs IEEE 1394 (Firewire) – a high-speed interface designed for external hard drives, digital video cameras, and other A/V peripherals IEEE 802.11 (Wi-Fi) – a series of standards used for wireless networking IEEE 802.16 (WiMAX) – a wireless communications standard for transferring cellular data The IEEE provides a wide range of membership options for individuals, groups, and corporations. IEEE knowledge groups, which are communities driven by members and volunteers, help individuals grow in their knowledge about various topics. Members can also contribute ideas and feedback to help the IEEE draft and establish new technical standards. Updated September 10, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ieee Copy

Iframe

Iframe An iframe (short for inline frame) is an HTML element that allows an external webpage to be embedded in an HTML document. Unlike traditional frames, which were used to create the structure of a webpage, iframes can be inserted anywhere within a webpage layout. An iframe can be inserted into an HTML document using the iframe tag, as shown in the example below, which uses a placeholder URL: <iframe src="https://www.example.com/page.html" width="728" height="90"></iframe> The code above would insert the contents of the URL into a 728 x 90 px inline frame within the webpage. The iframe source (src) can reference an external website or another page on the same server, such as src="/example.php". The width and height attributes are not required, but are commonly used to define the size of the iframe. Other iframe attributes, such as marginwidth and marginheight, are supported in HTML 4 and earlier, but in HTML5, CSS is used to customize the appearance of an iframe. Iframes are used for several different purposes, such as online advertising and multimedia. Many ad platforms use iframes to display ads on webpages since they provide more flexibility than an inline script. Since iframes may contain an entire webpage, advertisers can include extra tracking code within an iframe that helps ensure accurate reporting for both the advertiser and publisher. Iframes are also used for displaying different types of media within a webpage. For example, YouTube videos and Google Maps windows are often embedded in webpages using iframes. Many web applications use iframes since they can display dynamic content without reloading the webpage. Updated April 15, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iframe Copy

IGP

IGP Stands for "Integrated Graphics Processor." An IGP is a graphics chip that is integrated into a computer's motherboard. The IGP serves the same purpose as a video card, which is to process the graphics displayed on the computer. Integrated graphics processors take the graphics portion of the processing load off the main CPU. However, because IGPs are soldered onto the motherboard, their size is limited and they cannot use a dedicated fan to cool them, like some video cards do. For this reason, IGPs typically do not have the same performance as video cards, which may be attached to the computer's PCI or AGP slots. Because integrated graphics processors cannot be removed, they also cannot be upgraded like video cards can. However, because of their small size, IGPs are a good solution for laptop computers and entry-level desktop PCs. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/igp Copy

IIS

IIS Stands for "Internet Information Services." IIS is a web server software package designed for Windows Server. It is used for hosting websites and other content on the Web. Microsoft’s Internet Information Services provides a graphical user interface (GUI) for managing websites and the associated users. It provides a visual means of creating, configuring, and publishing sites on the web. The IIS Manager tool allows web administrators to modify website options, such as default pages, error pages, logging settings, security settings, and performance optimizations. IIS can serve both standard HTML webpages and dynamic webpages, such as ASP.NET applications and PHP pages. When a visitor accesses a page on a static website, IIS simply sends the HTML and associated images to the user’s browser. When a page on a dynamic website is accessed, IIS runs any applications and processes any scripts contained in the page, then sends the resulting data to the user’s browser. While IIS includes all the features necessary to host a website, it also supports extensions (or “modules”) that add extra functionality to the server. For example, the WinCache Extension enables PHP scripts to run faster by caching PHP processes. The URL Rewrite module allows webmasters to publish pages with friendly URLs that are easier for visitors to type and remember. A streaming extension can be installed to provide streaming media to website visitors. IIS is a popular option for commercial websites, since it offers many advanced features and is supported by Microsoft. However, it also requires a commercial license and the pricing increases depending on the number of users. Therefore, Apache HTTP Server, which is open source and free for unlimited users, remains the most popular web server software. Updated December 11, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iis Copy

Illegal Operation

Illegal Operation An "Illegal Operation" error occurs when a computer program encounters a critical issue that prevents it from continuing to run. This error message was common on older systems like Windows 95 or 98, though it still appears in some instances today. Common causes of illegal operations include: Divide-by-Zero Errors: When a program tries to perform an impossible mathematical operation Memory Access Violations: When software attempts to use or modify memory it doesn't have permission to access When an illegal operation occurs, the program typically crashes or closes abruptly. Modern operating systems like Windows, macOS, and Linux now handle such errors more gracefully, isolating the faulty application and preventing it from affecting the rest of the system. System-level crashes are rare today due to improved memory management and error containment mechanisms. Most programs now avoid the phrase "Illegal Operation" since it implies the user did something wrong. In reality, these errors usually occur because of software bugs or hardware problems. Modern error messages may include phrases like "application fault" or "runtime error" instead. Updated November 30, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/illegal_operation Copy
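
For instance, a divide-by-zero is one of the simplest illegal operations to reproduce. The Python sketch below shows how a modern program can trap the error and fail gracefully instead of crashing outright:

    def safe_divide(a, b):
        try:
            return a / b
        except ZeroDivisionError:
            # The runtime raises an error instead of letting the whole program crash
            print("Error: division by zero is an illegal operation")
            return None

    print(safe_divide(10, 2))   # 5.0
    print(safe_divide(10, 0))   # prints the error message, then None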

IM

IM Stands for "Instant Message." Instant messaging is a form of real-time text chat between two or more people over the Internet. It is similar to SMS text messaging but operates over any internet connection instead of relying strictly on a mobile phone network. IM conversations occur in real-time, allowing users of the same messaging service to engage in conversation by typing out and exchanging messages. Users can also engage in multiple conversations at once in separate threads. Instant messaging has caught on due to its convenience and speed, letting people communicate faster than email and less intrusively than a voice or video call. Many instant messaging platforms are available, but most are incompatible with each other and can only send messages to their own users. Instant messaging platforms include additional features beyond simply sending messages back and forth — many allow you to see whether a user is online and if they are currently typing a message. Many also allow you to send images and video, exchange files, or begin a quick VoIP or video call. Most provide encryption to keep conversations private, with some offering secure end-to-end encryption. [Image: Apple's iMessage service lets mobile and desktop users send instant messages] Early instant messaging predates the Internet and operated over a local connection. The Talk program allowed two users signed into the same Unix server to send messages that would appear on each others' terminals. Instant messaging rapidly gained popularity in the late 1990s as services like AOL Instant Messenger and ICQ allowed people to send instant messages to their friends when they were online. Instant messaging features came to other online services like Gmail and Facebook, letting people instant message their email contacts and social media friends. Mobile-oriented instant messaging apps like WhatsApp and iMessage came to smartphones once they became popular in the 2010s, replacing SMS as the primary text messaging method. Updated June 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/im Copy

Image Scaling

Image Scaling Image scaling is the process of resizing a digital image. Scaling down an image makes it smaller while scaling up an image makes it larger. Both raster graphics and vector graphics can be scaled, but they produce different results. Raster Image Scaling A raster graphic is a bitmap image comprised of individual pixels. Examples include JPEG and PNG files. The process of scaling raster graphics is also called "resampling," in which pixels are mapped to a new grid, which may be smaller or larger than the original matrix. Scaling down a raster graphic may cause a slight decrease in image quality since the pixels are squeezed into a smaller grid. Colors must be blended and edges are often smoothed out using a process called anti-aliasing. A good image scaling algorithm or "scaler" will keep most of the image detail when scaling down an image. The further an image is scaled down, the less detail can be maintained. Scaling up a raster graphic also decreases the image quality. Generally, increasing the size of a digital image will cause it to look blurry. The larger the image is scaled, the fuzzier it will appear. Anti-aliasing can be used to smooth out the edges, but a scaled-up image will not look as clear as the original image. Vector Image Scaling Vector graphics, such as an .SVG or .AI file, are comprised of points and lines, also called paths. These paths can be scaled up or down with no loss of quality. Instead of mapping pixels to another grid, vector image scaling simply moves the points within the image. Since vector graphics are not defined by pixels, they appear sharp at both small and large sizes. For this reason, company logos and application icons are often designed as vector graphics. When published in a software application or website, the images can be scaled to a specific size and then saved as a raster graphic. NOTE: "Video Upscaling" is the process of scaling video for a higher-resolution display. The goal is to make a lower-resolution video look sharp at a higher resolution. For example, an HD video might be upscaled to 4K. A good upscaling algorithm will keep as much detail as possible at the higher resolution, but cannot make the video look as sharp as a video originally recorded in 4K. Updated August 31, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/image_scaling Copy
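
A quick example of raster image scaling, assuming the third-party Pillow library is installed (the filenames are placeholders). The resampling filter determines how pixels are blended when they are mapped to the new grid:

    from PIL import Image

    img = Image.open("photo.jpg")                                         # original raster image
    half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)   # scale down
    double = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)   # scale up (appears softer)
    half.save("photo_small.jpg")
    double.save("photo_large.jpg")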

IMAP

IMAP Stands for "Internet Message Access Protocol." IMAP is a standard protocol for receiving and managing email. Unlike the older POP3 protocol, IMAP provides a shared mailbox for your email messages. It syncs messages with the mail server, which is especially helpful if you check your email with multiple devices. Imagine IMAP as a mailbox where all your messages stay until you decide what to do with them. The protocol provides rules that manage how your emails are delivered, read, and organized. IMAP ensures actions like reading, labeling, or moving messages are synchronized across all devices. It allows you to access the same messages from different devices without any inconsistencies. With IMAP, you can check your email on your phone in the morning, and the messages you've read will be marked as read when you check your email on your computer later in the day. Similarly, any messages you move or delete will sync to the same folders or trash on all your devices. The seamless synchronization across devices is why modern email clients recommend IMAP rather than POP3 when setting up a new account. NOTE: Since IMAP stores emails on the server, effective email management is essential. You should regularly organize and clear your inbox to prevent overloading the server's storage capacity. Updated April 24, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/imap Copy
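
For illustration, Python's built-in imaplib module speaks the IMAP protocol directly. The sketch below connects to a hypothetical mail server (the hostname and credentials are placeholders) and lists unread messages, which remain stored on the server:

    import imaplib

    with imaplib.IMAP4_SSL("imap.example.com") as mail:
        mail.login("user@example.com", "app-password")
        mail.select("INBOX")                          # open the server-side mailbox
        status, data = mail.search(None, "UNSEEN")    # messages not yet read on any device
        print("Unread message IDs:", data[0].split())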

IMEI

IMEI Stands for "International Mobile Equipment Identity." Every mobile device that connects to a cellular network has a unique IMEI number. This includes cell phones, smartphones, cellular-enabled tablets and smartwatches, and other devices that support cellular data. The IMEI number uniquely identifies a mobile device. It is separate from the SIM card or UICC number which is linked to a removable card in the device. This means the IMEI can be used to track or locate a specific device, regardless of what card is installed in it. For example, if your phone is stolen and the SIM card is swapped out, your mobile provider can still block access from the stolen phone using the IMEI. Difference Between the IMEI and Serial Number An IMEI is similar to a serial number since it is a unique number linked to a hardware device. While serial numbers are generated by the manufacturer, IMEI numbers are produced by the GSMA (Groupe Spéciale Mobile Association), an international organization that oversees mobile network operators. The GSMA provides each mobile device manufacturer with a range of numbers to use for IMEIs on devices they produce. Manufacturers can choose their own convention for serial numbers, including the length and type of characters (numeric or alphanumeric). IMEI numbers are always 15 digits long, comprised of a 14-digit unique number followed by a "check digit" (or a checksum), which validates the number. A variation of the IMEI, called the IMEISV (IMEI Software Version), includes the 14-digit number plus two digits for the version of the device's software. The IMEISV may change after a software update. NOTE: Most smartphones allow you to view your device's IMEI in the "About" section of the device settings. Another way to view the IMEI is to press *#06# on the numeric keypad. Updated July 20, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/imei Copy

Impact Printer

Impact Printer An impact printer is a type of printer that operates by striking a metal or plastic head against an ink ribbon. The ink ribbon is pressed against the paper, marking the page with the appropriate character, dot, line, or symbol. Common examples of impact printers include dot matrix printers, daisy-wheel printers, and ball printers. Dot matrix printers work by striking a grid of pins against a ribbon. Different characters are printed by using different pin combinations. Daisy-wheel printers use a circular wheel with "petals" that each have a different character or symbol on the end. In order to print each character, the wheel spins to the appropriate petal and a hammer strikes the petal against the ribbon and the page. Similarly, ball printers use a spherical ball with raised characters on the outside. The ball spins to each character before printing it on the page. While impact printers still have some uses (such as printing carbon copies), most printers are now non-impact printers. These printers, such as laser and inkjet printers, are much quieter than impact printers and can print more detailed images. Updated June 22, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/impactprinter Copy

Import

Import Import is a command typically located within a program's File menu (File → Import...). Like the standard File → Open... command, Import is used for opening files, but it serves a more specific purpose. Instead of opening standard file types, Import is often used for importing parts of files, program settings, plug-ins, or other unconventional file formats. Not all applications include an Import option, since the "Open..." command is often sufficient. Instead, the Import command is only found in programs that require a means of importing unique types of data. For example, Adobe Photoshop includes an Import command that allows users to import frames from a video file as layers in an image document. Intuit Quicken includes an Import feature that allows users to import data from .QIF documents or download information directly from the Web. Most Web browsers also include an Import option that allows users to import saved bookmarks and other settings. While the Import command can serve a number of different purposes, its main function is to import file types that cannot be opened with the standard "Open..." command. Therefore, if you want to import unconventional types of data into an application, the File → Import... command may be your best bet. Many programs also include an Export command, which is used for saving data in a specific format. Updated September 2, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/import Copy

Impression

Impression Impressions track how many times a webpage or element on a webpage is viewed. It is one of the standard metrics used in website analytics software. The term "impressions" most often refers to page impressions, which is synonymous with page views. Each time a page is viewed, a page impression is counted. Therefore, a single visitor can rack up multiple impressions on a website by visiting multiple pages. Unique impressions, or page views by different visitors, are useful for measuring the number of daily unique visitors to a website. Website analytics software may record unique impressions by either saving a 24 hour cookie on visitors' browsers or by resetting the record of unique visitors at the start of each day. Most analytics software records both total and unique impressions. These two metrics can be used to determine the average number of pages visitors view before leaving the website. While impressions typically refer to page views, they may also define how many times individual elements on a webpage are viewed. For example, in online advertising, "ad impressions" track the number of times individual advertisements are displayed. If a webpage contains three ad units, each page view will produce three ad impressions. By tracking the number of ad impressions, webmasters can monitor the performance of individual ads. This information can be used in combination with RPM, or revenue per 1,000 page views, to determine the ideal amount of ads for each webpage. Updated February 23, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/impression Copy
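
The difference between total and unique impressions can be shown with a short Python sketch using made-up visitor data:

    # One entry per page view, identified by a simplified visitor ID
    page_views = ["visitor_a", "visitor_b", "visitor_a", "visitor_c", "visitor_a"]

    total_impressions = len(page_views)         # every page view counts
    unique_impressions = len(set(page_views))   # each visitor counted once for the period

    print(total_impressions, unique_impressions)  # 5 3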

In-App Purchase

In-App Purchase An in-app purchase (also known as an IAP) is a purchase made within an app to unlock extra content or features. They are most common in mobile apps for iOS and Android, although developers of programs in the Windows and macOS app stores use them as well. IAPs may be a one-time transaction or a recurring subscription. Many developers use in-app purchases as a variation of the shareware model, often called "Free+". By releasing their app into app stores for free, they can encourage you to try it without committing to a purchase. The two most common methods involve presenting a limited set of features for as long as you want or offering a time-limited free trial. If you like the app, you can unlock the full version with an IAP. Some app developers use IAP subscriptions to help them fund continuing updates to an app, similar to periodically releasing a new version at a special upgrade price. Free mobile games are the most significant users of in-app purchases. They frequently use IAPs to remove time limits on gameplay, add new levels, or provide more cosmetic options for a player's avatar. So-called "pay-to-win" games can be played for free but are easier or more rewarding if you buy unlockable items. Be mindful of these games since in-game purchases often add up faster than you realize. Updated January 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/in-app_purchase Copy

Inbox

Inbox An inbox is the main folder that your incoming mail gets stored in. Whether you check your mail through a webmail interface or use a program like Outlook or Mac OS X Mail, each downloaded message gets stored in your inbox. If you check your mail from a POP3 account using an e-mail program, the messages are downloaded to the inbox on your local hard drive. However, if you use an IMAP mail server, your inbox is created on the server and therefore your messages are stored on the server as well. Because most people receive more mail than they can manage in one folder, it is common to create other folders to store your messages. After reading your messages, you may move them to other folders you have created (such as "Family," "Friends," "Business," etc.) or delete them by moving them to the Trash. However you decide to manage your mail, it is a good idea to keep the number of messages in your inbox from growing too large. Otherwise, you just might have to file for E-mail Bankruptcy. Updated September 10, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/inbox Copy

Index

Index An index is a list of data, such as a group of files or database entries. It is typically saved in a plain text format that can be quickly scanned by a search algorithm. This significantly speeds up searching and sorting operations on data referenced by the index. Indexes often include information about each item in the list, such as metadata or keywords, that allows the data to be searched via the index instead of reading through each file individually. For example, a database program such as Microsoft Access may generate an index of entries in a table. When an SQL query is run on the database, the program can quickly scan the index file to see what entries match the search string. Search engines also use indexes to store a large list of Web pages. These indexes, such as those created by Google and Yahoo!, are necessary for quickly generating search results. If search engines had to scan through millions of pages each time a user submitted a search, it would take roughly forever. Fortunately, by using search indexes, Web searches can be performed in less than a second instead of several hours. The term "index" can also be used as a verb, which not surprisingly means to create an index. It may also refer to adding a new item to an existing index. For example, Mac OS X 10.4 and later indexes the hard disk to create a searchable index for Apple's Spotlight search utility. Google's "Googlebot" crawls the Web on a regular basis, adding new Web pages to the Google index. While most database and hard disk indexes are updated on-the-fly, search engine indexes are only updated every few hours, days, or even weeks. This is why newly published Web pages may not show up in search engine results. While it may be a frustration for Web developers, it is a small price to pay for the convenience of super-fast Web searches. Updated February 9, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/index Copy
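
Here is a minimal Python sketch of the idea (the sample documents are made up): keywords are mapped to the files that contain them, so a search scans the small index instead of reading every file.

    documents = {
        "report.txt": "quarterly sales figures for the west region",
        "notes.txt": "meeting notes on the new sales strategy",
    }

    # Build the index: each word points to the set of files containing it
    index = {}
    for filename, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(filename)

    print(index.get("sales"))  # both files contain "sales"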

Indexed Color

Indexed Color Indexed color is a method of storing and displaying digital images using a limited color palette. It reduces the memory and storage space needed to display an image by referencing entries in an image's palette (called its color lookup table) instead of specifying a full RGB value. GIF images use indexed color exclusively, and PNG and TIFF images may also use indexed colors to save storage space. Full-color digital images use 24 bits per pixel to define each pixel's color, out of more than 16 million possible values. Referencing an entry in a color lookup table instead uses less data, depending on the size of the color lookup table. Choosing one of 256 possible colors requires 8 bits; a smaller 16-color palette only requires 4 bits per pixel. Some color palettes may set one of their color entries as transparent, allowing an image to have a transparent background. Image editing programs generate a color lookup table whenever they save an image in an indexed color format. The program analyzes the image to identify the most common colors, and then creates a palette from the results. Images with a lot of subtle color shades or smooth gradients will have trouble limiting the palette to 256 (or fewer) colors. In these cases, dithering blends colors together by mixing two separate colors in patterns — when viewed by a human eye, these patterns blur together slightly to create a new color between the two. Updated March 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/indexed_color Copy
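
For example, assuming the third-party Pillow library is installed (the filenames are placeholders), a full-color image can be converted to indexed color with a 256-entry color lookup table generated from the image's most common colors:

    from PIL import Image

    img = Image.open("photo.png")     # 24-bit RGB image
    # Convert to palette ("P") mode, building an adaptive 256-color lookup table
    indexed = img.convert("P", palette=Image.ADAPTIVE, colors=256)
    indexed.save("photo_indexed.png") # each pixel now stores an 8-bit palette index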

Influencer

Influencer An influencer is a person who promotes products, services, or events via social media. Common platforms include Instagram, Twitter, and YouTube. Paying influencers to promote brands is a popular means of social media marketing. The number one qualification of an influencer is a large number of followers. Small influencers may have under 100,000 followers, while the most popular influencers have several million followers. Some influencers build their follower base from scratch by posting photos, videos, or tweets that gain a lot of attention. Other influencers use their existing celebrity status to build a follower base. An influencer must also have expertise or authority in his or her fields of influence. For example, a weight-lifting champion may have the credibility to promote workout equipment and training videos. An experienced cook may be able to influence people's decisions regarding diet and nutrition. A technology expert may recommend specific hardware and software products. Influencers often sign contracts with companies, which may require that the influencer publishes a certain number of posts per week or month. The company may also require specific words, mentions, or hashtags. For example, a clothing company might pay an influencer an ongoing fee to post at least two photos of the person wearing the brand each week. Companies may also pay influencers a one-time fee for a single post or a series of posts. This is a common way to promote brand launches or local events, such as concerts. NOTE: In 2017, the Federal Trade Commission (FTC) instituted rules influencers must follow when promoting brands online. For example, Instagram posts must include wording that clearly states when they have been sponsored. Updated May 7, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/influencer Copy

Infotainment

Infotainment Infotainment, a combination of "information" and "entertainment," describes media designed to educate users in an entertaining manner. It spans a range of formats, including television shows, websites, and software applications. For example, the History Channel produces entertaining documentaries that teach viewers about different historical periods. The National Geographic website provides informative and engaging articles. Duolingo provides an entertaining method of learning different languages. [Image: a History Channel infotainment series] Infotainment apps often include dynamic visuals, interactive elements, and engaging stories to present information in a more accessible way. They often include games with rewards for completing objectives or reaching certain milestones. They combine factual content with interactive features designed to sustain interest and enjoyment. The line between pure information and infotainment can sometimes blur, as the intention to entertain is subjective. However, content that deliberately integrates storytelling, humor, or visually appealing elements into its presentation of factual material typically qualifies as infotainment. This approach has gained popularity in recent years, reflecting a growing recognition that people are more likely to engage with information when it is delivered in an enjoyable and relatable manner. Updated January 2, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/infotainment Copy

Inkjet

Inkjet Inkjet printers are the most common type of consumer printers. The inkjet technology works by spraying very fine drops of ink on a sheet of paper. These droplets are electrically charged ("ionized"), which allows them to be directed by charged deflection plates in the ink's path. As the paper is fed through the printer, the print head moves back and forth, spraying thousands of these small droplets on the page. While inkjet printers used to lack the quality and speed of laser printers, they have become almost as fast as laser printers and some can even produce higher-quality images. Even low-budget inkjet printers can now print high-resolution photos. The amazing thing is, as the quality of inkjet printers has improved, the prices have continued to drop. However, for most people, refilling the inkjet cartridges a few times will often cost more than the printer. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/inkjet Copy

Inode

Inode An inode (short for "index node") is a data structure Linux uses to store information about a file. Each inode has a unique ID that identifies an individual file or other object in the Linux file system. Inodes contain the following information: File type - file, folder, executable program etc. File size Time stamp - creation, access, modification times File permissions - read, write, execute Access control list - permissions for special users/groups File protection flags File location - directory path where the file is stored Link count - number of hardlinks to the inode Additional file metadata File pointers - addresses of the storage blocks that store the file contents Notably, an inode does not contain the filename or the actual data. When a file is created in the Linux file system, it is assigned an inode number and a filename. This linked pair allows the filename to be changed without affecting the file ID in the system. The same holds true when renaming directories, which are treated as files in Linux. File data is stored across one or more blocks on the storage device. An inode includes a pointer to this data but does not contain the actual data. Therefore, all inodes are relatively small, regardless of the size of the files they identify. Modern file systems provide a hierarchical directory structure (folders) to make it easier to organize files. However, in Linux, files are not actually stored in directories. Instead, directories only contain filenames and their associated inode IDs. The ID references an inode in the Linux inode table, which points to the storage blocks where the file data is stored. NOTE: Each Linux storage device has an inode limit, which is defined when the disk is formatted. In rare cases, when many small files are created, the inode limit may be reached before the full storage capacity has been used. Updated December 20, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/inode Copy
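
Much of the inode metadata listed above can be read from Python with os.stat() (the filename is a placeholder); the ls -i shell command shows the same inode numbers.

    import os

    info = os.stat("example.txt")             # the filename is resolved to an inode
    print("Inode number:", info.st_ino)
    print("Size in bytes:", info.st_size)
    print("Hard link count:", info.st_nlink)
    print("Permissions:", oct(info.st_mode & 0o777))
    print("Last modified:", info.st_mtime)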

Input

Input Whenever you enter data into your computer, it is referred to as input. This can be text typed in a word processing document, keywords entered in a search engine's search box, or data entered into a spreadsheet. Input can be something as simple as moving the mouse or clicking the mouse button or it can be as complex as scanning a document or downloading photos from a digital camera. Devices such as the keyboard, mouse, scanner, and even a digital camera are considered input devices. This is because they allow the user to input data into the computer (yes, the word "input" can also be used as a verb). While input generally comes from humans, computers can also receive input from other sources. These include audio and video devices that record movies and sound, media discs that install software, and even the Internet, which is used to download files and receive data such as e-mail or instant messages. The opposite of input is output, which is what the computer produces based on user input. Input and output devices are collectively referred to as I/O devices. Updated December 12, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/input Copy

Input Device

Input Device An input device is any device that provides input to a computer. There are dozens of possible input devices, but the two most common ones are a keyboard and mouse. Every key you press on the keyboard and every movement or click you make with the mouse sends a specific input signal to the computer. These commands allow you to open programs, type messages, drag objects, and perform many other functions on your computer. Since the job of a computer is primarily to process input, computers are pretty useless without input devices. Just imagine how much fun you would have using your computer without a keyboard or mouse. Not very much. Therefore, input devices are a vital part of every computer system. While most computers come with a keyboard and mouse, other input devices may also be used to send information to the computer. Some examples include joysticks, MIDI keyboards, microphones, scanners, digital cameras, webcams, card readers, UPC scanners, and scientific measuring equipment. All these devices send information to the computer and therefore are categorized as input devices. Peripherals that output data from the computer are called output devices. Updated May 13, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/inputdevice Copy

Insertion Point

Insertion Point An insertion point is the location on the screen where the next character typed will be inserted. This location may be in a text document, a form field, a Web browser address bar, or anywhere else on the screen that allows text input. The insertion point is usually indicated by a flashing vertical bar, sometimes called the caret or text cursor. (The similarly shaped "I-beam" pointer is the mouse cursor that appears when it hovers over editable text.) When you type a character on the keyboard, it is inserted at the cursor's position, and the insertion point moves to the right of the new character. The insertion point continues to move to the right as each character is entered. You can typically change the current insertion point by clicking in a different location within a text field or word processing document. This allows you to insert or delete text wherever you click. You can also create a text selection by clicking and dragging the cursor over a block of text. When text is selected, the insertion point becomes the entire selection, meaning the next character typed will replace the currently selected text. Updated November 22, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/insertion_point Copy

Instagram

Instagram Instagram is an online photo sharing service. It allows you to apply different types of photo filters to your pictures with a single click, then share them with others. While it is a rather basic service, Instagram's simplicity has helped it gain widespread popularity. While nearly all smartphones have built-in cameras, they often do not produce quality photos. By using Instagram, you can liven up otherwise mediocre images and make them look more professional. For example, Instagram's Valencia filter brightens photos and enhances the contrast, improving the appearance of drab photos. The Earlybird filter adds a slight blur to the image, warms the colors, and vignettes the corners, giving photos a softer look. You can also make more drastic changes to photos using a filter like "1977," which makes images look like vintage photographs taken with an old camera. The two primary ways to use Instagram are the Instagram website (Instagram.com) and the Instagram app. The website allows you to upload images, manage your photos, apply filters, and share them with your friends. The app allows you to take pictures with your iPhone or Android device and immediately apply the filter of your choice. You can share your photos directly on Instagram.com or on other social media websites like Facebook, Twitter, and Tumblr. NOTE: Instagram was acquired by Facebook in 2012 for approximately $1 billion. Updated May 1, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/instagram Copy

Install

Install To install a program is to perform all the necessary system configuration steps to make it ready to use. It often means running the program's installer, which extracts its files into the proper directories. However, you can install some apps through other automated methods like app stores and package managers, or in some cases, install them manually by moving an executable file into an Applications folder. When you install an application using its installer, it performs several steps. First, it checks whether the computer meets the application's system requirements and whether an older version of the app is present. It asks the user to agree to its terms of service and to configure some pre-installation options. Next, it creates an application folder and extracts executable files, libraries, and other resources. It configures system settings like Windows registry options and file type associations. If necessary, it activates the application with the developer's activation server. At the end of the process, it removes any temporary files, and once the installation is complete, the application is ready to run. Many applications, like Microsoft Office, use installers that automate the installation process. While installers are common, they're not the only way to install software. For example, on macOS, many apps just need to be moved or copied into the Applications folder, where they'll be recognized by the operating system and added to the Launchpad. Most platforms' app stores automatically install an app right after you purchase it. System administrators can use package managers to install software packages from online repositories using a command line. Finally, some applications, known as "portable apps," don't require any installation at all; they store every resource and library within the executable file itself and don't require any configuration of the operating system. Updated October 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/install Copy

Installer

Installer When you download new software to your computer, you'll often need to run its installer first. An installer program contains a compressed and packaged version of an application in a single file to make it easier to download. When run, it unpacks it to your Program Files folder (in Windows) or Applications folder (on macOS). An installer may install an application for the first time, or update an older application to the newest version. Installers may also make changes to your system configuration settings. Many applications will associate with a particular file type, automatically opening files of that type in the new application whenever you double-click one. Installers may also tell your operating system to run the new application automatically whenever the computer boots or make other changes to how your operating system runs. Some system configuration changes may require you to reboot your computer after running an installer. Not every application needs an installer. Some applications that don't require system configuration changes, resource folders, or file associations may be opened and run no matter where they are. You may find it helpful to place these applications in your Program Files or Application folder regardless to make them easier to locate when you need them. NOTE: Many installers will create an uninstaller program that will automatically remove the application, its resource folders, and the system configuration changes if you don't want the application anymore. Updated January 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/installer Copy

Integer

Integer An integer is a whole number (not a fraction) that can be positive, negative, or zero. Therefore, the numbers 10, 0, -25, and 5,148 are all integers. Unlike floating point numbers, integers cannot have decimal places. Integers are a commonly used data type in computer programming. For example, whenever a number is being incremented, such as within a "for loop" or "while loop," an integer is used. Integers are also used to determine an item's location within an array. When two integers are added, subtracted, or multiplied, the result is also an integer. However, when one integer is divided by another, the result may be an integer or a fraction. For example, 6 divided by 3 equals 2, which is an integer, but 6 divided by 4 equals 1.5, which contains a fraction. Decimal numbers may either be rounded or truncated to produce an integer result. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/integer Copy
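As a quick illustration of the division behavior described above, the following Python sketch (the variable names are arbitrary) shows a result with a fractional part being truncated or rounded back to an integer.

a, b = 6, 4
print(a / b)         # 1.5  (dividing two integers may produce a fraction)
print(a // b)        # 1    (truncating division keeps only the whole-number part)
print(round(a / b))  # 2    (rounding to the nearest integer)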

Integrated

Integrated Integrated comes from the Latin word "integer," which means whole. If something is integrated, it is designed to act or function as one unit. While the term has a singular meaning, it has many different applications within the IT world. It may describe a hardware component, a software program, or the combination of hardware and software. An integrated hardware component may be combined with one or more other components. Many laptops, for instance, include integrated graphics chips that are built into the motherboard. In some cases, the graphics processor is integrated directly into the CPU. Integrating hardware components can save space and reduce manufacturing costs, but it may also limit the performance and upgradability of a computer system. Integrated software often describes applications bundled with an operating system. This collection of programs, also called "system software," is designed to provide basic functionality with a consistent user experience. Both Windows and Macintosh operating systems come with several integrated programs and utilities, such as text editors, web browsers, and activity monitors. Hardware and software may also be integrated together. A well-known example is the Macintosh platform, in which the software is integrated with the hardware, since both are developed by Apple. This is different than the Windows platform, in which Microsoft develops the software, but the hardware is produced by many different manufacturers. Apple's iOS is also integrated with its iPhone and iPad devices, which is different than the open Android platform. Updated November 21, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/integrated Copy

Integrated Circuit

Integrated Circuit An integrated circuit, or IC, is a small chip that can function as an amplifier, oscillator, timer, microprocessor, or even computer memory. An IC is a small wafer, usually made of silicon, that can hold anywhere from hundreds to millions of transistors, resistors, and capacitors. These extremely small electronics can perform calculations and store data using either digital or analog technology. Digital ICs use logic gates, which work only with values of ones and zeros. A low signal sent to a component on a digital IC will result in a value of 0, while a high signal creates a value of 1. Digital ICs are the kind you will usually find in computers, networking equipment, and most consumer electronics. Analog, or linear, ICs work with continuous values. This means a component on a linear IC can take a value of any kind and output another value. The term "linear" is used since the output value is a linear function of the input. For example, a component on a linear IC may multiply an incoming value by a factor of 2.5 and output the result. Linear ICs are typically used in audio and radio frequency amplification. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/integratedcircuit Copy
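The digital and linear behaviors described above can be sketched in a few lines of Python. This is only an illustration of the concepts, not a model of any particular chip.

def and_gate(a, b):
    # Digital logic: the output is 1 only when both inputs are 1
    return 1 if a == 1 and b == 1 else 0

def linear_stage(value, gain=2.5):
    # Linear (analog) behavior: the output is a linear function of the input
    return value * gain

print(and_gate(1, 0))     # 0
print(and_gate(1, 1))     # 1
print(linear_stage(0.8))  # 2.0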

Intellectual Property

Intellectual Property Intellectual property refers to the ownership of intangible and non-physical goods. This includes ideas, names, designs, symbols, artwork, writings, and other creations. It also refers to digital media, such as audio and video clips that can be downloaded online. Since intellectual property is intangible, it is more difficult to protect than other types of property. For example, tangible property, such as a car, can be recovered or replaced if it is stolen. However, if intellectual property is stolen, it may be difficult to recover. Say, for example, a person comes up with a great idea for a new invention. If someone else steals the idea, the potential profit of the invention may also be taken away. Similarly, if a digital recording of a new song is "leaked" on the Internet, thousands of people may download it and redistribute it to others. If this happens, the profit potential of selling the music may be substantially diminished. Because of its monetary implications, intellectual property is often used as a legal term to safeguard the rights of creators and inventors. It has also become increasingly important to media production companies who need to protect the distribution of their digital media. By defining and establishing intellectual property rights, innovators and creators can have legal protection of their ideas and creations. This may be done by copyrighting written works, applying for patents for inventions, and trademarking brands, names, and logos. Of course, the sooner these legal steps are taken, the better. After all, it is much easier to protect an idea before it is stolen than after someone else takes it! Updated November 6, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/intellectualproperty Copy

Interactive Video

Interactive Video Interactive video (also known as "IV") is a type of digital video that supports user interaction. These videos play like regular video files, but include clickable areas, or "hotspots," that perform an action when you click on them. For example, when you click on a hotspot, the video may display information about the object you clicked on, jump to a different part of the video, or open another video file. Interactive videos are common on YouTube, a popular video sharing website. They allow you to select one or more options while the video is playing. For example, towards the end of a video, you may be asked to select which character in the video you liked best. Once you make your choice, a new video will open and may provide more information about the character you selected. Other examples of interactive videos include card tricks, choose your own adventure videos, and interactive tutorials. You can create an interactive video in YouTube by using YouTube's "Annotations" feature. This tool allows you to overlay content on top of the video and create clickable hotspots. If you add interactive content to your video, it is helpful to include a few extra seconds of "decision time" that allows the user to make his or her selection. It is also important to make sure each clickable option links to another live video file on YouTube. NOTE: While clickable advertisements are sometimes overlaid on YouTube videos and other video files, these are not considered interactive videos. Updated January 4, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/interactive_video Copy

Interface

Interface The term "interface" can refer to either a hardware connection or a user interface. It can also be used as a verb, describing how two devices connect to each other. A hardware interface is used to connect two or more electronic devices together. For example, a printer typically connects to a computer via a USB interface. Therefore, the USB port on the computer is considered the hardware interface. The printer itself also has a USB interface, which is where the other end of the USB cable connects. Several common peripherals connect to a computer via USB, while other devices use a Firewire connection or other interface. Ethernet connections are commonly used for networking, which is why most cable modems and routers have an Ethernet interface. Many other electronic devices besides computers use some type of interface to connect to other equipment. For example, a TV may connect to a Blu-ray player via an HDMI cable and may connect to a cable box using component cables. Audio devices may have either analog or digital audio connections and may include a MIDI interface, which is used to transfer MIDI data. iPods have a proprietary "dock connector" interface, which allows them to connect to a power source and transfer data via USB. Since there are many different types of electronic devices, there are also a lot of hardware interfaces. Fortunately, standards like USB, Firewire, HDMI, and MIDI have helped consolidate the number of interfaces into a manageable number. After all, it would be pretty difficult if each digital camera, printer, keyboard, and mouse used a different interface. Computers would need a lot more ports on the back! Updated March 31, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/interface Copy

Interlaced

Interlaced A common way to reduce the bandwidth of a video signal is to interlace it. Each frame of an interlaced video signal shows every other horizontal line of the image. As the frames are projected on the screen, the video signal alternates between showing even and odd lines. When this is done fast enough, i.e. around 60 fields (alternating half-frames) per second, the video image looks smooth to the human eye. Interlacing has been used for decades in analog television broadcasts that are based on the NTSC (U.S.) and PAL (Europe) formats. Because only half the image is sent with each frame, interlaced video uses roughly half the bandwidth it would take to send the entire picture each time. The downside of interlaced video is that fast motion may appear slightly blurred. For this reason, the DVD and HDTV standards also support progressive scan signals, which draw each line of the image consecutively. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/interlaced Copy
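The alternating-line idea can be illustrated with a short Python sketch, treating a frame as a simple list of horizontal lines (the data here is made up purely for illustration).

frame = ["line 0", "line 1", "line 2", "line 3", "line 4", "line 5"]
even_field = frame[0::2]   # lines 0, 2, 4 are sent in one pass
odd_field = frame[1::2]    # lines 1, 3, 5 are sent in the next pass
print(even_field)
print(odd_field)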

Internal Hard Drive

Internal Hard Drive An internal hard drive is a hard drive that resides inside the computer. Most computers come with a single internal hard drive, which includes the operating system and pre-installed applications. While laptop computers only have room for one internal hard drive, some desktop computers have multiple hard drive bays, which allows for multiple internal hard drives. Internal hard drives have two main connections – one for data and another for power. The data port may have an ATA or a SATA interface. This interface connects to the computer's hard drive controller, which enables the drive to communicate with the motherboard. The power connector attaches to an electrical cable, which provides power from the computer's main power supply. The internal hard drive is a key component of a computer since it stores the user's software and personal files. While the rest of a computer's components can be easily replaced, if a hard drive fails, the data may not be able to be recovered. Therefore, it is wise to regularly back up the data stored on an internal hard drive using an external hard drive or an online backup service. Updated May 27, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/internalharddrive Copy

Internet

Internet The Internet is a global wide area network that connects computer systems across the world. It includes several high-bandwidth data lines that comprise the Internet "backbone." These lines are connected to major Internet hubs that distribute data to other locations, such as web servers and ISPs. In order to connect to the Internet, you must have access to an Internet service provider (ISP), which acts as the middleman between you and the Internet. Most ISPs offer broadband Internet access via a cable, DSL, or fiber connection. When you connect to the Internet using a public Wi-Fi signal, the Wi-Fi router is still connected to an ISP that provides Internet access. Even cellular data towers must connect to an Internet service provider to provide connected devices with access to the Internet. The Internet provides different online services. Some examples include: Web – a collection of billions of webpages that you can view with a web browser Email – the most common method of sending and receiving messages online Social media – websites and apps that allow people to share comments, photos, and videos Online gaming – games that allow people to play with and against each other over the Internet Software updates – operating system and application updates can typically be downloaded from the Internet In the early days of the Internet, most people connected to the Internet using a home computer and a dial-up modem. DSL and cable modems eventually provided users with "always-on" connections. Now mobile devices, such as tablets and smartphones, make it possible for people to be connected to the Internet at all times. The Internet of Things has turned common appliances and home systems into "smart" devices that can be monitored and controlled over the Internet. As the Internet continues to grow and evolve, you can expect it to become an even more integral part of daily life. Updated September 17, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/internet Copy

Internet of Things

Internet of Things The Internet of Things, commonly abbreviated "IoT," is an umbrella term that refers to anything connected to the Internet. It includes traditional computing devices, such as laptops, tablets, and smartphones, but also includes a growing list of other devices that have recently become Internet enabled. Examples include home appliances, automobiles, wearable electronics, security cameras, and many other things. In order for a device to be part of the Internet of Things, it must be able to communicate with other devices. Therefore, it requires some type of built-in wired or wireless communication. Most IoT devices are Wi-Fi enabled, but Bluetooth can also be used to transfer data to nearby devices. Anything that connects directly to the Internet must have a unique IP address, which is one of the reasons the adoption of IPv6 has been so important. IoT devices are commonly called "smart devices," since they are able to communicate with other things. For example, you can't control a traditional oven when you are away from home. However, a smart oven that is connected to the cloud can be accessed remotely via a web interface or an app. You can check the status of the oven and start preheating it before you get home. Other smart home devices, such as smart thermostats, light fixtures, wall outlets, and window treatments are considered part of the IoT since they can be accessed and controlled over the Internet. Along with the capacity to communicate, many IoT devices also include an array of sensors that provide useful information. For example, a wearable device may include sensors that track your heart rate and activity level. It can automatically upload your data to your personal account on the Internet. A security system might include motion detectors that send you an alert if any suspicious activity is recorded. Lighting systems can be automated using sensors that detect how dark it is outside. While the Internet of Things is still in its infancy, it provides promising opportunities for the future. For example, connecting medical devices to the Internet will make it easier for doctors to track patients' health, providing more consistent data and requiring fewer appointments. Agricultural products will be able to self-adjust based on weather forecasts, creating more efficient farming methods. Internet-connected cars will communicate with each other, providing better traffic information and paving the way for self-driving cars. In time, the Internet of Things will become less of an abstract idea and more of a way of life. Updated January 16, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/internet_of_things Copy

InterNIC

InterNIC Stands for "Internet Network Information Center." InterNIC was an organization created in 1993 to provide domain name registration and IP address allocation services. It was a joint effort by Network Solutions and AT&T, with Network Solutions providing domain name registration services and AT&T providing directory and database services. In 1998, InterNIC and the Internet Assigned Numbers Authority (IANA) were reorganized to come under the control of a new non-profit entity, the Internet Corporation For Assigned Names and Numbers (ICANN). Before the establishment of InterNIC, Network Solutions operated the first domain name service (DNS) registry under contract with the United States Department of Defense and SRI International. This partnership assigned Network Solutions the responsibility of giving out domain names for the early ".com," ".net," ".org," ".edu," ".gov," and ".mil" top-level domains. However, as the Internet became more popular worldwide among businesses, schools, and other non-military organizations, the responsibility for its administration was shifted to the private sector. In 1993, the National Science Foundation (NSF) created InterNIC in partnership with Network Solutions and AT&T to manage the DNS registry and distribute domain names to non-military organizations. This move made Network Solutions the first private domain name registrar. In 1998, the NSF reorganized InterNIC to combine it and its responsibilities with the IANA under a new organization, ICANN. After the merger, InterNIC continued to provide WHOIS services for several years before ICANN assumed those responsibilities as well. Updated January 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/internic Copy

Interpreter

Interpreter An interpreter is a program that reads and executes code. This includes source code, pre-compiled code, and scripts. Common interpreters include Perl, Python, and Ruby interpreters, which execute Perl, Python, and Ruby code respectively. Interpreters and compilers are similar, since they both recognize and process source code. However, a compiler does not execute the code like an interpreter does. Instead, a compiler simply converts the source code into machine code, which can be run directly by the operating system as an executable program. Interpreters bypass the compilation process and execute the code directly. Since interpreters read and execute code in a single step, they are useful for running scripts and other small programs. Therefore, interpreters are commonly installed on Web servers, which allows developers to run executable scripts within their webpages. These scripts can be easily edited and saved without the need to recompile the code. While interpreters offer several advantages for running small programs, interpreted languages also have some limitations. The most notable is the fact that interpreted code requires an interpreter to run. Therefore, without an interpreter, the source code serves as a plain text file rather than an executable program. Additionally, programs written for an interpreter may not be able to use built-in system functions or access hardware resources like compiled programs can. Therefore, most software applications are compiled rather than interpreted. Updated September 16, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/interpreter Copy
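The read-and-execute loop at the heart of an interpreter can be sketched in a few lines of Python. The toy language below (with made-up commands "set," "add," and "print") is purely illustrative; each line is executed as soon as it is read, with no separate compilation step.

def interpret(source):
    variables = {}
    for line in source.strip().splitlines():
        command, name, *args = line.split()
        if command == "set":
            variables[name] = int(args[0])     # create a variable
        elif command == "add":
            variables[name] += int(args[0])    # modify it
        elif command == "print":
            print(name, "=", variables[name])  # display it

interpret("""
set counter 1
add counter 5
print counter
""")                                           # prints: counter = 6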

Interrupt

Interrupt An interrupt is a signal sent to the processor that interrupts the current process. It may be generated by a hardware device or a software program. A hardware interrupt is often created by an input device such as a mouse or keyboard. For example, if you are using a word processor and press a key, the program must process the input immediately. Typing "hello" creates five interrupt requests, which allows the program to display the letters you typed. Similarly, each time you click a mouse button or tap on a touchscreen, you send an interrupt signal to the processor. An interrupt is sent to the processor as an interrupt request, or IRQ. Each input device has a unique IRQ setting, or priority. This prevents conflicts and ensures common input devices, such as keyboards and mice, are prioritized. Software interrupts are used to handle errors and exceptions that occur while a program is running. For example, if a program expects a variable to be a valid number, but the value is null, an interrupt may be generated to prevent the program from crashing. It allows the program to change course and handle the error before continuing. Similarly, an interrupt can be used to break an infinite loop, which could create a memory leak or cause a program to be unresponsive. Both hardware and software interrupts are processed by an interrupt handler, also called an interrupt service routine, or ISR. When a program receives an interrupt request, the ISR handles the event and the program resumes. Since interrupts are often as brief as a keystroke or mouse click, they are often processed in less than a millisecond. Updated March 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/interrupt Copy
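At the application level, operating systems expose a related mechanism called signals. The Python sketch below is a minimal example, assuming a typical desktop OS; it installs a handler that runs when the process receives an interrupt signal (SIGINT, usually triggered by pressing Ctrl+C), pausing the normal flow of the program to handle the event.

import signal, time

def handle_interrupt(signum, frame):
    # Runs whenever the process receives SIGINT, interrupting the loop below
    print("Interrupt received; cleaning up and exiting")
    raise SystemExit

signal.signal(signal.SIGINT, handle_interrupt)

while True:
    print("working...")
    time.sleep(1)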

Intranet

Intranet An intranet is a private network that can only be accessed by authorized users. The prefix "intra" means "internal" and therefore implies an intranet is designed for internal communications. "Inter" (as in Internet) means "between" or "among." Since there is only one Internet, the word "Internet" is capitalized. Because many intranets exist around the world, the word "intranet" is lowercase. Some intranets are limited to a specific local area network (LAN), while others can be accessed from remote locations over the Internet. Local intranets are generally the most secure since they can only be accessed from within the network. In order to access an intranet over a wide area network (WAN), you typically need to enter login credentials. Intranets serve many different purposes, but their primary objective is to facilitate internal communication. For example, a business may create an intranet to allow employees to securely share messages and files with each other. It also provides a simple way for system administrators to broadcast messages and roll out updates to all workstations connected to the intranet. Most intranet solutions provide a web-based interface for users to access. This interface provides information and tools for employees and team members. It may include calendars, project timelines, task lists, confidential files, and a messaging tool for communicating with other users. The intranet website is commonly called a portal and can be accessed using a custom intranet URL. If the intranet is limited to a local network, it will not respond to external requests. Examples of intranet services include Microsoft SharePoint, Huddle, Igloo, and Jostle. While some services are open source and free of charge, most intranet solutions require a monthly fee. The cost is usually related to the number of users within the intranet. Updated September 23, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/intranet Copy

IOPS

IOPS Stands for "Input/Output Operations Per Second." IOPS is a metric used to measure the performance of a storage device or storage network. The IOPS value indicates how many different input or output operations a device or group of devices can perform in one second. It may be used along with other metrics, such as latency and throughput, to measure overall performance. An IOPS value is typically synonymous with "total IOPS," which includes a mix of read and write operations. However, it is also possible to measure more specific values, such as sequential read IOPS, sequential write IOPS, random read IOPS, and random write IOPS. Higher values mean a device is capable of handling more operations per second. For example, a high sequential write IOPS value would be helpful when copying a large number of files from another drive. SSDs have significantly higher IOPS values than HDDs. Since SSDs do not have a physical drive head that moves around the drive, they can perform over 1,000 times more read/write operations per second than a typical hard drive. For example, a hard drive that spins at 7200 RPM may have a total IOPS value of 90. A modern SSD may have an IOPS value above 100,000. Some high-end flash drives have IOPS measurements above one million. While IOPS was important when measuring hard drive performance, most real-world situations do not require more than a thousand inputs/outputs per second. Therefore, IOPS is rarely viewed as an important metric in SSD performance. Latency and throughput are the primary factors that affect SSD speed, while storage capacity and durability (lifespan) are also important to consider. Updated May 12, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iops Copy
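A hard drive's rough IOPS figure can be estimated from its mechanical delays: roughly one operation per average seek time plus half a rotation. The short Python sketch below uses an assumed 8.5 ms average seek time for a 7200 RPM drive; these are illustrative figures, not measurements of any specific drive.

rpm = 7200
average_seek_ms = 8.5                       # assumed average seek time
rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, about 4.2 ms

estimated_iops = 1000 / (average_seek_ms + rotational_latency_ms)
print(round(estimated_iops))                # roughly 80 operations per second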

iOS

iOS iOS is a mobile operating system developed by Apple. It currently runs on the iPhone. It previously ran on the now-discontinued iPod Touch; a version also ran on the iPad, which now runs a separate fork called iPadOS. Since the iPhone is a touchscreen device, the iOS graphical user interface utilizes touch and voice input instead of a keyboard and mouse. You can tap, hold, or swipe your finger on the touchscreen to interact with apps and objects on the screen. Tapping and swiping on certain parts of the screen lets you move between apps and access special system screens. For example, tapping on an app's icon on the Home screen launches that app, swiping down from the top shows the Notification center, and swiping left and right moves between Home screens. You can also use the Siri voice assistant to issue some commands and use speech recognition in the place of keyboard input. Since iOS is designed for mobile devices, it provides a simple user experience and lacks some features found in desktop operating systems. For example, iOS applications must run sandboxed, limiting their access to the hardware and other apps. The iPhone screen only displays the interface for one app at a time; while apps can perform tasks in the background, you cannot interact with more than one at a time. Your access to the file system is also limited to certain folders. However, new features are introduced in every annual update as the operating system matures. iOS includes a suite of first-party software, like productivity tools, messaging apps, information apps, and a web browser. Third-party apps for iOS are exclusively available in its App Store. Updated February 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ios Copy

IP

IP Stands for "Internet Protocol." IP provides a standard set of rules for sending and receiving data over the Internet. It allows devices running on different platforms to communicate with each other as long as they are connected to the Internet. In order for an Internet-connected host to be recognized by other devices, it must have an IP address. This may be either an IPv4 or IPv6 address, but either way, it uniquely defines a device on the Internet. The Internet Protocol also provides basic instructions for transferring packets between devices. However, it does not actually establish the connection or define the ordering of the packets transmitted. These aspects are handled by the Transmission Control Protocol, which works in conjunction with the Internet Protocol to transfer data between systems on the Internet. For this reason, connections between Internet-connected systems are often called "TCP/IP" connections. NOTE: IP may also be short for "IP address," as in "What is your IP?" In this case, IP refers to the unique identifier of a system, not the protocol itself. Updated January 27, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ip Copy
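Most programs rely on the operating system's TCP/IP implementation rather than handling IP packets directly. The short Python sketch below opens a TCP/IP connection to a web server and sends a minimal HTTP request; the host name is only an example, and the IP layer underneath handles the addressing and packet delivery.

import socket

with socket.create_connection(("techterms.com", 80), timeout=10) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: techterms.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))  # first part of the server's reply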

IP Address

IP Address An IP address, or simply an "IP," is a unique address that identifies a device on the Internet or a local network. It allows a system to be recognized by other systems connected via the Internet protocol. There are two primary types of IP address formats used today — IPv4 and IPv6. IPv4 An IPv4 address consists of four sets of numbers from 0 to 255, separated by three dots. For example, the IP address of TechTerms.com is 67.43.14.98. This number is used to identify the TechTerms website on the Internet. When you visit http://techterms.com in your web browser, the DNS system automatically translates the domain name "techterms.com" to the IP address "67.43.14.98." There are three classes of IPv4 address sets that can be registered through the InterNIC. The smallest is Class C, which consists of 256 IP addresses (e.g., 123.123.123.xxx — where xxx is 0 to 255). The next largest is Class B, which contains 65,536 IP addresses (e.g. 123.123.xxx.xxx). The largest block is Class A, which contains 16,777,216 IP addresses (e.g. 123.xxx.xxx.xxx). The total number of IPv4 addresses ranges from 000.000.000.000 to 255.255.255.255. Because 256 = 2^8, there are 2^(8 x 4), or 4,294,967,296, possible IP addresses. While this may seem like a large number, it is no longer enough to cover all the devices connected to the Internet around the world. Therefore, many devices now use IPv6 addresses. IPv6 The IPv6 address format is much different than the IPv4 format. It contains eight sets of four hexadecimal digits and uses colons to separate each block. An example of an IPv6 address is: 2602:0445:0000:0000:a93e:5ca7:81e2:5f9d. There are 3.4 x 10^38 (or 340 undecillion) possible IPv6 addresses, meaning we shouldn't run out of IPv6 addresses anytime soon. Updated September 21, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ip_address Copy
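Both address formats can be parsed and validated with standard library tools in most languages. Here is a brief Python sketch using the ipaddress module, with the example addresses from this entry:

import ipaddress

v4 = ipaddress.ip_address("67.43.14.98")
v6 = ipaddress.ip_address("2602:0445:0000:0000:a93e:5ca7:81e2:5f9d")

print(v4.version, int(v4))        # 4, plus the 32-bit integer behind the dotted form
print(v6.version, v6.compressed)  # 6 2602:445::a93e:5ca7:81e2:5f9d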

iPad

iPad The iPad is a tablet computer developed by Apple. It is smaller than a typical laptop, but significantly larger than the average smartphone. The iPad does not include a keyboard or a trackpad, but instead has a touchscreen interface, which is used to control the device. Like the iPhone, the iPad runs Apple's iOS operating system. This allows the iPad to run third-party apps, which can be downloaded from Apple's App Store. While apps designed for the iPhone can also be installed and run on the iPad, many iOS apps are developed specifically for the iPad. Since the iPad's screen is much larger than the iPhone's screen, iPad apps can include more user interface features that would not fit within an iPhone app. Therefore, productivity, graphics, and video-editing apps are often developed specifically for the iPad rather than the iPhone. The iPad's 9.7-inch screen also makes it ideal as an e-reader. The iBooks app allows you to download electronic versions of books from the iBookstore and read them on your iPad. Since the iPad has a full color screen, it supports novels as well as art books and illustrated children's stories. Books can be read one page at a time in portrait mode or with pages side by side in landscape mode. All versions of the iPad include Wi-Fi capability, which can be used for surfing the Web, checking email, and downloading apps directly to the device. Some versions of the iPad also include 3G support for transferring data over a cellular connection, though this capability requires a monthly cellular service contract. While the original iPad did not include a camera, the iPad 2 includes both rear-facing and front-facing cameras. These cameras can be used for video conferencing with other iPad, iPhone, or Mac users via the FaceTime feature. Updated March 9, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ipad Copy

IPC

IPC Stands for "Interprocess Communication." IPC is a feature of modern operating systems that enables processes to communicate with each other. It improves performance by allowing concurrent processes to share system resources efficiently. The two primary methods of interprocess communication are memory sharing and message passing. Memory sharing involves indirect communication since the OS manages RAM usage and allocation. Message passing requires active communication between processes. For example, one process may request exclusive access to a specific resource, such as a file, from another process. Message passing is an effective way to ensure two applications do not access the same block of data at the same time. When actively passing messages between processes, it is important to avoid deadlocks and race conditions. A deadlock may cause a process to become unresponsive, while a race condition may produce errors and unexpected results. IPC Examples Below are some common ways operating systems use interprocess communication: File access - limiting access to files on a local or remote storage device to one process at a time Network communication - ensuring data sent through a network socket does not overlap Shared memory - allowing multiple processes to use the same block of memory, often through the use of a buffer that dynamically allocates free memory Signals - sending system messages to processes to notify them of an event, similar to an interrupt Updated July 21, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ipc Copy
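Message passing between processes can be demonstrated with Python's standard multiprocessing module. The sketch below starts a second process and sends it a message through an operating-system-managed queue; the message text is arbitrary.

from multiprocessing import Process, Queue

def worker(queue):
    # Runs in a separate process and waits for a message from the parent
    print("Worker received:", queue.get())

if __name__ == "__main__":
    queue = Queue()                             # a channel shared by both processes
    child = Process(target=worker, args=(queue,))
    child.start()
    queue.put("hello from the parent process")  # message passing between processes
    child.join()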

iPhone

iPhone The iPhone is a smartphone developed by Apple. The first iPhone was released in June 2007 and an updated version has been released roughly every year since then. While the iPhone was originally only available to AT&T customers, the iPhone 4 was released on the Verizon network in February 2011. The iPhone has a sleek, minimalist design, and differs from other smartphones in its lack of buttons. Most operations on the iPhone are performed using the touch screen display. The only physical buttons include a sleep/wake button, a mute switch, volume up/down buttons, and a home button. All versions of the iPhone have a rear-facing camera, but the iPhone 4 introduced a front-facing camera, which can be used for video calls made using the FaceTime feature. The iPhone 4 also included the first 960 x 640 pixel "retina display," which has double the resolution of previous iPhone displays. The iPhone 5 and 5s had a taller 4-inch display with a 1136 x 640 resolution. The iPhone 6, released in September 2014, provided a larger 4.7" display with 1334x750 pixels. The iPhone 6 Plus, released at the same time, came with an even larger 5.5-inch 1920x1080 pixel display. The iPhone 7 and 7 Plus have the same screen sizes as the 6 and 6 Plus. Internally, the iPhone runs iOS, an operating system developed by Apple for portable devices. This allows the iPhone to run "apps," or applications developed specifically for the iPhone. Apps can be downloaded from Apple's App Store, which can be accessed through iTunes or directly from the iPhone's built-in App Store app. There are hundreds of thousands of apps available from the App Store, which provide the iPhone with limitless functionality. Examples of iPhone apps include Tech Terms (for this website) and Slang.net, a slang dictionary. Updated September 29, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iphone Copy

iPod

iPod The iPod was a series of portable media players made by Apple. It came in multiple form factors over five separate product lines. An iPod synced music from a person's iTunes music library, allowing them to carry part of their music collection anywhere. The iPod could play a variety of audio formats like MP3 and AAC files, including DRM-protected music from the iTunes Music Store. Some later models could also display photos and play videos. Apple released the first iPod in 2001. Roughly the size of a deck of cards, it was available with either 5 or 10 GB of storage on a small internal hard drive. You controlled it using a mechanical scroll wheel surrounded by several buttons that controlled menu navigation and audio playback. It synced and charged its battery over Firewire and only supported Macs — Windows support did not come until its second generation model. Later revisions upgraded the LCD screen from monochrome to full color, changed from Firewire to USB via a proprietary dock connector, and added support for video playback. Other iPod models followed the original, which was renamed the iPod Classic. The hard drive-based iPod Mini and flash memory-based iPod Nano were smaller versions of the original iPod. The iPod Shuffle removed the screen entirely and could either shuffle songs during playback or play them in order. The touchscreen-equipped iPod Touch was the final model, resembling an iPhone without cellular networking. It could connect to the Internet over Wi-Fi, use built-in apps, and eventually access the iOS App Store. Apple discontinued the iPod line in 2022. It was slowly made obsolete by smartphones that included media playback alongside many other features. Streaming music services that allowed people to access a massive library over a mobile connection became more popular than purchasing individual songs and albums from digital music stores. With those changes, most people no longer needed to carry a dedicated device for their music. Updated November 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ipod Copy

IPS

IPS Stands for "Intrusion Prevention System." An IPS is a network security system designed to prevent malicious activity within a network. It is often used in combination with an intrusion detection system (IDS) and may also be called an intrusion detection and prevention system (IDPS). Like an IDS, an IPS may include hardware, software, or both. It may also be configured for a network or a single system. However, unlike IDSes, intrusion prevention systems are designed to prevent malicious activity, rather than simply detecting it. When an IDS detects suspicious activity, such as numerous failed login attempts, it may log the IP address. If it detects a suspicious data transfer, it may check the packets against a database of known viruses to see if any malicious code is being transferred. When an intrusion is detected, an IPS compares the activity to a set of rules designed to prevent malicious activities from taking place. For example, when an IPS detects numerous failed logins, it may block the IP address from accessing any network devices. If it detects a virus, the data may be quarantined or deleted before it reaches its destination. Examples of hardware-based IPSes include Cisco's IPS 4500 Series, IBM's Security Network "GX" systems, and HP's TippingPoint "NX" devices. Software IPS solutions include Check Point's IPS Software Blade and McAfee's Host Intrusion for Desktop. Updated May 10, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ips Copy

IPsec

IPsec Stands for "Internet Protocol Security." IPsec is a group of protocols that help create secure connections between two devices over a network. It works on top of the standard IP protocol suite to add authentication and encryption to network traffic, whether it travels over a local network or the public Internet. It is most commonly used to establish secure connections for Virtual Private Networks (VPNs), but can help secure data transfers for many purposes. The IPsec protocol suite consists of three protocols that work together to authenticate and encrypt IP traffic: The Internet Key Exchange (IKE) protocol handles negotiations between two devices (also known as hosts) to decide which algorithms to use for authentication and encryption. It also generates a secure, shared cryptographic key that both hosts can use to encrypt and decrypt data. The Authentication Header (AH) protocol adds authentication information to each data packet's header. This information includes a cryptographic hash of the packet's contents that the recipient can use to see whether anything modified the packet during transit. The Encapsulating Security Payload (ESP) protocol takes a normal IP data packet, encrypts it using the agreed-upon algorithm and cryptographic key, then creates a new data packet with a new header (including the information generated by the AH protocol) that contains the encrypted original packet as its payload. A secure IPsec packet contains an encrypted and authenticated IP data packet. These three protocols work together to protect IP traffic end to end. The IKE protocol creates a secure connection between hosts and establishes authentication and encryption details. The ESP protocol then encrypts each data packet, and the AH protocol adds authentication information to its header. It also ensures that the receiving device can authenticate the packet and alert the recipient if any other device tampered with it in transit. IPsec Modes The IPsec protocol suite supports two modes for securely transferring data, depending on the type of network it's operating over and what kind of data it's carrying. Tunnel Mode encrypts the entire IP packet and encapsulates it in a new packet. This packet includes an entirely new header with the origin and destination addresses of the tunnel's endpoints, typically the routers that connect directly to the computers involved in the session. The receiving router decrypts the packet, reads its destination address, and forwards it to its destination. Tunnel mode is typically used for VPN connections and when transferring IPsec packets over the public Internet. Transport Mode encrypts only the contents of an IP packet, leaving the original header in place. It appends information from the AH protocol to the header for authentication but does not obscure the addresses of the origin and destination hosts. Transport mode requires slightly less overhead for each packet and is typically used when transferring IPsec packets over a local network. Updated December 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ipsec Copy

IPv4

IPv4 IPv4 is the most widely used version of the Internet Protocol. It defines IP addresses in a 32-bit format, which looks like 123.123.123.123. Each of the four sections can contain a number from 0 to 255, which means the total number of IPv4 addresses available is 4,294,967,296 (256 x 256 x 256 x 256 or 2^32). Each computer or device connected to the Internet must have a unique IP address in order to communicate with other systems on the Internet. Because the number of systems connected to the Internet is quickly approaching the number of available IP addresses, IPv4 addresses are predicted to run out soon. When you consider that there are over 6 billion people in the world and many people have more than one system connected to the Internet (for example, at home, school, work, etc.), it is not surprising that roughly 4.3 billion addresses are not enough. To solve this problem, a new 128-bit IP system, called IPv6, has been developed and is in the process of replacing the current IPv4 system. During this transitional process from IPv4 to IPv6, systems connected to the Internet may be assigned both an IPv4 and IPv6 address. Updated July 14, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ipv4 Copy
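The 32-bit math behind the address count can be shown in a few lines of Python (123.123.123.123 is simply the placeholder address used above):

octets = [123, 123, 123, 123]
as_integer = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
print(as_integer)   # the address packed into a single 32-bit number
print(256 ** 4)     # 4,294,967,296 possible addresses (2^32)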

IPv6

IPv6 Stands for "Internet Protocol version 6." IPv6 is the most recent version of the Internet Protocol (IP). It provides an identification and location system for internet-connected devices, assigning IP addresses to devices that help network equipment route traffic. IPv6 is gradually replacing the previous standard, IPv4, and resolves the problem of IPv4 address exhaustion. Every internet-connected device needs an IP address for identification that other devices can use to send it data. IPv4 uses 32-bit addresses, which allows for almost 4.3 billion unique addresses. While this was sufficient in the early days of the Internet, there are now far more internet-connected devices than possible IPv4 addresses. IPv6 instead uses 128-bit addresses, providing enough address space for 340 undecillion (or 340 trillion trillion trillion) unique IP addresses — enough to be effectively limitless. An IPv6 address consists of 8 groups of 4 hexadecimal digits, which is longer and more complex than an IPv4 address. For example, compare a simple IPv4 address like 192.168.0.47 to a more complex IPv6 address: 2001:0db8:85a3:0000:0000:8a2e:0370:7334 In addition to vastly increasing the available address space, IPv6 adds several other improvements to the IP protocol. It includes built-in support for IPsec security and multicast data flows (both optional features in IPv4). IPv6 uses a simpler packet header to reduce processing overhead during routing. It also simplifies the auto-configuration of network addresses to make it easier for devices to join networks. Finally, it reduces the need for NAT on local networks, which improves reliability for services like VoIP telephony. IPv6 Adoption The rollout of IPv6 began in the mid-2000s as software makers added support to operating systems and network infrastructure. DNS server support began in some countries in 2004. By 2008 it was possible to resolve any domain name using IPv6, and several large companies permanently enabled IPv6 support on their web servers in 2012. However, many networks still operate using IPv4 as the adoption of IPv6 moves at a gradual pace; as of 2023, nearly half of the traffic to Google uses IPv6, up from only 1.5% a decade earlier. Updated July 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ipv6 Copy
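The longer IPv6 format can also be handled with Python's standard ipaddress module. The sketch below expands and compresses the example address from this entry and shows the size of the total address space.

import ipaddress

addr = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.exploded)    # all eight groups of four hexadecimal digits
print(addr.compressed)  # 2001:db8:85a3::8a2e:370:7334 (zero groups collapsed)
print(2 ** 128)         # total number of possible IPv6 addresses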

IPX

IPX Stands for "Internetwork Packet Exchange." IPX is a networking protocol originally used by the Novell NetWare operating system and later adopted by Windows. IPX was introduced in the 1980s and remained popular through the 1990s. Since then, it has largely been replaced by the standard TCP/IP protocol. IPX is the network layer of the IPX/SPX protocol and SPX is the transport layer. IPX has a similar function to the IP protocol and defines how data is sent and received between systems. The SPX protocol is used to establish and maintain a connection between devices. Together, the two protocols can be used to create a network connection and transfer data between systems. IPX is connectionless, meaning it does not require a consistent connection to be maintained while packets are being sent from one system to another. It can resume the transfer where it left off when a connection is temporarily dropped. IPX only loads when a network connection is attempted, so it does not take up unnecessary resources. NOTE: In the 1990s, popular video games like Quake, Descent, and WarCraft 2 supported IPX for network gaming. A service like Kali could be used to emulate an IPX connection over the Internet, enabling Internet gaming. Now most video games use TCP/IP or their own proprietary protocols to enable gamers to play online. Updated March 5, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ipx Copy

IRC

IRC Stands for "Internet Relay Chat." IRC is a service that allows people to chat with each other online. It operates on a client/server model where individuals use a client program to connect to an IRC server. Popular IRC clients include mIRC for Windows and Textual for OS X. Several web-based clients are also available, including KiwiIRC and Mibbit. In order to join an IRC conversation, you must choose a username and a channel. Your username, also called a handle, can be whatever you want. It may include letters and numbers, but not spaces. A channel is a specific chat group within an IRC network where users can talk to each other. Some networks publish lists of available channels, while others require you to manually enter channel names in order to join them. Channels always begin with a hashtag followed by a name that represents their intended chat topic, such as "#teenchat," "#politics," or "#sports". Some IRC channels require a password while others are open to the public. When you join a channel, the chat window will begin displaying messages people are typing. You can join the conversation by typing your own messages. While channel members can type whatever they want, popular channels are often moderated. That means human operators or automated bots may kick people out of the channel and even ban users who post offensive remarks or spam the channel with repeated messages. While IRC was designed as a public chat service, it supports other features such as private messaging and file transfers. For example, you can use an IRC command (which typically begins with a forward slash "/") to request a private chat session with another user. Then you can use another IRC command to send the user a file from your local system. NOTE: IRC was a popular way for users to connect online before social media became prevalent in the early 2000s. Today, many people still use IRC, but social media sites and apps are much more popular. Updated April 23, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/irc Copy

IRQ

IRQ Stands for "Interrupt Request." An IRQ is a hardware signal sent to a computer's CPU that tells it to stop what it is doing and run a higher-priority process instead. Hardware devices like network cards, storage controllers, and keyboards send IRQs when they need to alert the CPU that they have new information to process. For example, a computer's keyboard sends an IRQ each time the user presses a key to get the CPU to process that keystroke immediately. Each component that can send an IRQ is assigned an IRQ address by the operating system. Early PCs could only support a total of 16 IRQ addresses, which meant that some components had to share an address. Sharing an address often led to conflicts and crashes when two devices tried to send an IRQ at the same time. For example, a computer would typically assign IRQ address 7 to both its sound card and printer port; both devices simultaneously sending an IRQ would freeze the system. Modern computers support hundreds of IRQ channels, so IRQ conflicts are now extremely uncommon. [Image: A list of IRQ addresses in Windows.] All IRQs go through a dedicated interrupt controller that receives incoming interrupts, prioritizes them, and passes them on to the CPU. In modern computers, CPUs use multiple processor cores that each have their own interrupt controller (called the Advanced Programmable Interrupt Controller, or APIC). A motherboard's chipset includes a companion controller (called the I/O APIC) that receives incoming interrupt requests from other hardware devices and distributes them amongst the multiple cores for processing. Updated August 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/irq Copy

Irrational Number

Irrational Number An irrational number is a real number that cannot be expressed as a ratio of two integers. When an irrational number is written with a decimal point, the digits after the decimal point continue infinitely with no repeating pattern. The number "pi" or π (3.14159...) is a common example of an irrational number since its digits after the decimal point continue infinitely without repeating. Many square roots are also irrational since they cannot be reduced to fractions. For example, √2 is close to 1.414, but it cannot be written out exactly since the digits after the decimal point continue infinitely: 1.414213562373095... This value cannot be expressed as a fraction, so the square root of 2 is irrational. As of 2018, π has been calculated to 22 trillion digits and no pattern has been found. If a number can be expressed as a ratio of two integers, it is rational. Below are some examples of irrational and rational numbers. 2 - rational √2 - irrational 3.14 - rational π - irrational √3 - irrational √4 - rational 7/8 - rational 1.333 (repeating) - rational 1.567 (repeating) - rational 1.567183906 (not repeating) - irrational NOTE: When irrational numbers are encountered by a computer program, they must be approximated. Updated June 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/irrational_number Copy
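The note above can be illustrated with a short Java sketch: a double variable can only hold a finite, rational approximation of an irrational value.

public class IrrationalExample {
    public static void main(String[] args) {
        // A double holds only a finite (rational) approximation of an irrational value.
        double pi = Math.PI;         // approximately 3.141592653589793
        double root2 = Math.sqrt(2); // approximately 1.4142135623730951

        System.out.println("pi ~ " + pi);
        System.out.println("sqrt(2) ~ " + root2);

        // Squaring the approximation does not give exactly 2, illustrating the rounding error.
        System.out.println("root2 * root2 = " + (root2 * root2)); // 2.0000000000000004
    }
}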

ISA

ISA Stands for "Industry Standard Architecture." ISA is a type of bus used in PCs for adding expansion cards. For example, an ISA slot may be used to add a video card, a network card, or an extra serial port. The original 8-bit version of ISA uses a 62-pin connection and runs at clock speeds of up to 8.33 MHz. The 16-bit version uses 98 pins and supports the same clock speeds. The original 8-bit version of ISA was introduced in 1981 but the technology did not become widely used until 1984, when the 16-bit version was released. Two competing technologies -- MCA and VLB -- were also used by some manufacturers, but ISA remained the most common expansion bus for most of the 1980s and 1990s. However, by the end of the twentieth century, ISA slots were beginning to be replaced by faster PCI and AGP slots. Today, most computers only support PCI and AGP expansion cards. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/isa Copy

iSCSI

iSCSI Stands for "Internet Small Computer Systems Interface." iSCSI is an extension of the standard SCSI storage interface that allows SCSI commands to be sent over an IP-based network. It enables computers to access hard drives over a network the same way they would access a drive that is directly connected to the computer. iSCSI is a popular protocol used by storage area networks, which allow multiple computers to share multiple hard drives. For example, data centers can be spread out over multiple locations using iSCSI and a standard Internet connection. While data access may be slower over the Internet than over a direct SCSI connection, iSCSI is a helpful way to create off-site backups and share large amounts of data across multiple locations. Updated July 2, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/iscsi Copy

ISDN

ISDN Stands for "Integrated Services Digital Network." ISDN is a telecommunications technology that enables the transmission of digital data over standard phone lines. It can be used for voice calls as well as data transfers. The first ISDN standard was defined in 1988 by the CCITT (International Telegraph and Telephone Consultative Committee), the organization now known as the ITU-T. However, it wasn't until the 1990s that the service became widely used. Since the introduction of ISDN, several variants have been standardized, including the following: Basic Rate Interface (BRI) - supports two 64 kbps bearer channels (or B channels) for a data transfer rate of 128 kbps. Primary Rate Interface (PRI) - supports 30 B channels and two additional channels in a single E1 connection, providing a data transfer rate of 2,048 kbps. Always on Dynamic ISDN (AODI) - an always-on ISDN connection that uses the X.25 protocol and supports speeds up to 2 Mbps. ISDN was a common high-end Internet service in the 1990s and early 2000s and was offered by many ISPs as a faster alternative to dial-up Internet access. Many businesses and organizations used ISDN service for both Internet access and network connections between locations. In the mid-2000s, DSL and cable services began to replace ISDN connections because of their faster speed and lower cost. Today, ISDN is still used in some network connections, but it is rarely used for Internet access. Updated May 14, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/isdn Copy

ISO

ISO The International Organization for Standardization (ISO) is an international non-governmental organization that develops and publishes standards for products, services, and systems. ISO standards apply to a broad range of industries, including computer and networking technology, manufacturing, agriculture, healthcare, and business services. It draws its membership from standards bodies in more than 160 countries to develop technology and product standards used worldwide. Every standard published by the ISO is identified by a reference number, often (but not always) combined with the year the standard was published or revised. For example, ISO 3758:2012 defines a common set of symbols for laundry care and was last updated in 2012. Standards for information technology are developed through a joint technical committee with the International Electrotechnical Commission (IEC) and are labeled as ISO/IEC standards. For example, ISO/IEC 7498 defines the OSI model that standardizes communication between computer systems, while ISO/IEC 10918 defines the JPEG standard for compressing digital images. Other Uses An ISO file is a disk image file that contains an exact duplicate of a source disk's contents. It comes from the ISO/IEC 9660 standard, which defines the file system used to store data on CD-ROM discs. Discs and disk images formatted to this standard can be read on any operating system that supports the standard, including Windows, macOS, and Unix/Linux. ISO disk images are uncompressed and contain the source disc's entire file system, including directory structure and file metadata. ISO also refers to the level of sensitivity to light in photography. It originally referred to the sensitivity of camera film, but now also applies to the sensitivity of a digital camera's image sensor. ISO is one of the three elements that determine an image's exposure, along with aperture size and shutter speed. Taking a photograph at a low ISO (100 to 200) results in a sharp image with little noise but requires a lot of light; taking one with a high ISO (800 or above) does not require nearly as much scene lighting but does result in a noisier image. NOTE: "ISO" is not officially an abbreviation of the organization's full name in any language. Instead, the short form is derived from the Greek "isos," meaning "equal," to maintain the same short form no matter the language used. File extensions: .ISO Updated November 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/iso Copy

ISP

ISP Stands for "Internet Service Provider." An ISP provides access to the Internet. Whether you're at home or work, each time you connect to the Internet, your connection is routed through an ISP. Early ISPs provided Internet access through dial-up modems. This type of connection took place over regular phone lines and was limited to 56 Kbps. In the late 1990s, ISPs began offering faster broadband Internet access via DSL and cable modems. Some ISPs now offer high-speed fiber connections, which provide Internet access through fiber optic cables. Companies like Comcast and Cox provide cable connections, while companies like AT&T and Verizon provide DSL Internet access. To connect to an ISP, you need a modem and an active account. When you connect a modem to the telephone or cable outlet in your house, it communicates with your ISP. The ISP verifies your account and assigns your modem an IP address. Once you have an IP address, you are connected to the Internet. You can use a router (which may be a separate device or built into the modem) to connect multiple devices to the Internet. Since each device is routed through the same modem, they will all share the same public IP address assigned by the ISP. ISPs act as hubs on the Internet since they are often connected directly to the Internet backbone. Because of the large amount of traffic ISPs handle, they require high bandwidth connections to the Internet. In order to offer faster speeds to customers, ISPs must add more bandwidth to their backbone connection in order to prevent bottlenecks. This can be done by upgrading existing lines or adding new ones. Updated May 29, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/isp Copy

ISR

ISR Stands for "Interrupt Service Routine." An ISR (also called an interrupt handler) is a software routine that the CPU runs in response to an interrupt request from a hardware device, temporarily interrupting the active process. When the ISR is complete, the interrupted process resumes. A basic example of an ISR is a routine that handles keyboard events, such as pressing or releasing a key. Each time a key is pressed, the ISR processes the input. For example, if you press and hold the right arrow key in a text file, the ISR will signal to the CPU that the right arrow key is depressed. The CPU sends this information to the active word processor or text editing program, which will move the cursor to the right. When you let go of the key, the ISR handles the "key up" event. This interrupts the previous "key down" state, which signals to the program to stop moving the cursor. Similar to Newton's law of inertia (an object in motion tends to stay in motion), computer processes continue to run unless interrupted. Without an interrupt request, a computer will remain in its current state. Each input signal causes an interrupt, forcing the CPU to process the corresponding event. Many types of hardware devices, including internal components and external peripherals, can send interrupts to the CPU. Examples include keyboards, mice, sound cards, and hard drives. A device driver enables communication between each of these devices and the CPU. ISRs prioritize interrupt requests based on the IRQ setting of the device (or port). Typically the keyboard is at the top of the IRQ list, while devices like hard drives are further down. Updated December 7, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/isr Copy

IT

IT Stands for "Information Technology." IT is the general term for the use and management of computing technology in businesses and other organizations. Strictly defined, Information Technology focuses on the storage, retrieval, and processing of information. More frequently, it refers to the department and workers in those organizations responsible for installing and maintaining computers and networks. In addition to managing information, many IT departments are also responsible for their business's telecommunications technology. Corporate email and telephone services are part of IT, as are a business's Internet and internal networking operations. IT Roles Corporate IT departments handle a lot more than simply setting up printers and resetting forgotten passwords. Roles and responsibilities within an IT department are generally organized into three main categories. Operations roles handle the day-to-day IT work like tech support, device management, and system software maintenance. Most workers' interactions with their IT department are with people in this role. Infrastructure roles are responsible for the physical setup and maintenance of IT equipment like network cabling, servers, routers, and individual workstations. In addition to physical hardware, infrastructure roles may also be responsible for writing and updating proprietary software. Governance involves the design of policies and procedures for IT work. Proper governance of an IT system ensures that the services provided match the needs of the business as a whole. Updated November 21, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/it Copy

Iteration

Iteration Iteration is a process in computer programming that repeats a function a set number of times, with the result of each iteration often feeding into the next. Iterative functions run the same code block repeatedly and automatically, processing multiple chunks of data in sequence without redundant code. The most common forms of iteration used in computer programs are while loops and for loops. Loops run repeatedly as long as a specified condition is true. For example, a developer first creates a variable that tracks how many times the function has run, then sets a total number of loops. After each iteration finishes, it increments the tracking variable and runs again until the tracking variable reaches the total number previously set. While the specific syntax of while and for loops varies in each programming language, they often follow a similar pattern. The syntax for these loops in Java looks like this: While loop: while (i < 30) { ... i++; } For loop: for (i=0; i < 30; i++) { ... } Both samples first create a tracking variable, i, and a total number of times for the iteration to run. The while loop includes the incrementation command, i++, at the end of the loop body, then checks the condition again when it starts over. The for loop creates the tracking variable, the total number of iterations, and how to increment the tracking variable at the start. [Image: A while loop repeats as long as its condition is true and ends once its condition is false.] Software developers use iterative loops for many common purposes. For example, a developer may use an iterative function to perform calculations on each value in an array, running once for each array address until it reaches the end. A web developer may use a PHP script with an iterative loop to pull lines from a database record, add HTML formatting to display that content in a table, then close the table after getting the final line in the database record. Updated June 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/iteration Copy
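The flattened Java snippets above can be expanded into a small runnable example; the variable names and array values are made up for illustration.

public class IterationExample {
    public static void main(String[] args) {
        // A while loop: the condition is checked before each pass,
        // and the tracking variable is incremented inside the body.
        int i = 0;
        while (i < 30) {
            // ... work done on each pass ...
            i++;
        }

        // A for loop: initialization, condition, and increment appear on one line.
        int total = 0;
        int[] values = {3, 1, 4, 1, 5, 9};
        for (int j = 0; j < values.length; j++) {
            total += values[j]; // process one array element per iteration
        }
        System.out.println("Sum of values: " + total); // 23
    }
}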

ITIL

ITIL Stands for "Information Technology Infrastructure Library." ITIL is a set of recommended procedures and guidelines organizations should follow when using or delivering information technology (IT) services. It provides an organized approach to managing IT services, which is designed to benefit both companies and clients. The five primary stages of the ITIL service lifecycle are listed below. Service strategy - the planning stage of creating or using a new IT service. Examples include building a wireless campus network or implementing an intranet for members of an organization. This step involves defining the purpose and goals of the service. Service design - the design stage of the service, which may also include the initial implementation. The service must be designed to meet the needs of the end users and be sustainable for the expected duration of the service. Choosing the appropriate technologies is one of the most important aspects of the design stage. Typically scalable solutions are preferred so they can grow with the organization. Service transition - the stage that focuses on transitioning between IT services and IT personnel. This includes the initial transition of rolling out the new service as well as moving to other solutions in the future. It also involves creating a plan for who will manage the services if the technologies or personnel change. Service operation - the stage that covers the actual management and operation of the service once it is live. This includes monitoring and maintaining the service as well as developing a plan for fixing issues as they arise. The goal is to provide a reliable service with limited downtime throughout the life of the service. Continual service improvement (CSI) - a plan to continually update and improve the service as needed. In the IT world, the needs of businesses and clients are constantly changing. Therefore a successful service must be flexible and upgradable, adapting to these changing needs. A plan to regularly update the service can help meet this goal. Businesses and organizations are not required to follow the practices outlined by ITIL. However, the recommendations can be helpful to any organization looking to offer or implement an IT-related service. AXELOS, which maintains the ITIL best practices, provides ITIL certifications to individuals who have developed a comprehensive understanding of ITIL. Updated March 1, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/itil Copy

ITSM

ITSM Stands for "Information Technology Service Management." ITSM is the high-level strategic management of an organization's Information Technology Services. It covers IT management at all of its stages — starting from the initial design all the way through implementation and delivery. ITSM is focused more on the processes and workflows used to deliver IT services to end users than the technology and tools. The foundation of ITSM relies on frameworks that outline best practices for IT. One notable framework is the Information Technology Infrastructure Library (ITIL), which supplies an organization- and technology-neutral baseline for providing IT services. Since this framework outlines what to do and how to do it, but not what tools to use, it can apply to an in-house enterprise IT system or a cloud-based IT-as-a-Service (ITaaS) implementation. ITSM tools are usually offered as software suites, allowing IT managers to tailor the tools used to the needs of their IT department. These tools provide a dashboard, or "service desk," which serves as the primary point of contact for an organization's users when they experience IT problems. The suites also supply other tools that an organization can customize based on their needs, like workflow management, cloud storage, asset tracking, knowledge databases, and license management. Updated November 1, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/itsm Copy

iTunes

iTunes iTunes was a media playback application developed by Apple Inc. The first version launched in 2001 and was based on SoundJam MP, a third-party music player for macOS that Apple acquired in 2000. Apple refined SoundJam's interface, added features, and rebranded it as iTunes, with much of its development handled by the original SoundJam team. iTunes allowed users to import music from CDs, add audio files from their hard drives, and organize all their songs in a single iTunes Library. In 2003, Apple launched the iTunes Music Store, which allowed users to purchase and download songs and albums to their music library. iTunes also supported other media, including audiobooks, podcasts, movies, and TV shows. [Image: The Apple iTunes interface in 2017.] While iTunes was primarily a music player, it also served as a central hub for mobile devices like iPods and iPhones. The app made it easy to sync music from a Mac or PC (using iTunes for Windows) with an iPod or iPhone. Syncing initially required a FireWire connection, but later supported USB and Wi-Fi. iTunes also supported playlists, including custom song collections and smart playlists that dynamically updated based on criteria such as genre, artist, or play count. For instance, a smart playlist could automatically collect all songs labeled as "Rock" or those you hadn't listened to recently. Later versions of the app included iTunes Radio, which offered free streaming from Internet radio stations. In 2019, Apple began phasing out iTunes and replaced it with individual apps, including Apple Music, Apple TV, and Apple Podcasts. Syncing is now handled automatically through iCloud, though you can still perform manual syncs with Apple Music. Updated October 3, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/itunes Copy

IVR

IVR Stands for "Interactive Voice Response." IVR is a telephony technology that interacts with callers through touch-tone keypresses and voice input, allowing users to navigate menus and access information systems over the phone. While early IVR systems simply routed calls, modern ones are more sophisticated, leveraging natural language processing (NLP) to better understand spoken commands. These systems can handle routine queries and provide automated customer service without a human operator. Chances are you've interacted with IVR when calling customer support about a product or service. For example, your bank may employ IVR when you call in to check your account balance, view recent transactions, or make transfers. Credit card companies, utility providers, and stock brokerages also provide IVR for phone-based account management tasks. IVR systems also facilitate appointment scheduling, automated surveys, movie showtime inquiries, and intelligent call routing in call centers. A typical IVR system features a series of menus with prerecorded prompts that guide users through various options. While many selections are straightforward, involving simple number presses, others may require you to provide specific information, such as your name or account number. Advanced IVR systems now utilize voice biometrics to authenticate users securely, leading to faster service. In today's digital landscape, IVR systems are increasingly integrated with artificial intelligence, enhancing their capability to understand conversational input and creating a more human-like interaction. Still, the quality of the experience can vary depending on how effectively the system recognizes and processes voice commands, especially in more complex scenarios. Updated October 24, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ivr Copy

Jailbreak

Jailbreak Jailbreaking is a process that removes restrictions placed on a device by the manufacturer, often by exploiting a software bug. It most commonly refers to granting root access on an iPhone or iPad, but it also can apply to other devices like game consoles. Users typically jailbreak their devices to install applications from outside sources, bypassing official app stores. The jailbreaking process requires exploiting a vulnerability on the device, usually in its firmware or operating system. Once hackers discover an exploitable vulnerability for a particular device, they make and distribute a software tool to automate the process. Since device manufacturers routinely patch these vulnerabilities, each jailbreaking method usually only works on a single device generation or operating system version. Most methods involve connecting the device to a computer by a USB cable, running the jailbreaking tool on the computer, and restarting the device when prompted. The jailbreaking tool exploits the vulnerability it was designed for and provides the user with root access. Most will also install a package manager that the user can use to install applications. Jailbreaking a device does come with some drawbacks. First, any operating system or firmware update will remove the jailbreak, making the user choose between up-to-date software or maintaining root access. Second, installing software from unknown sources may also introduce malware. A jailbreak's modifications to the system software may also lead to instability and crashes. Finally, some jailbreak methods, known as "tethered" jailbreaks, require the device to be connected to a computer running the jailbreak tool each time it restarts in order to maintain the jailbreak. NOTE: When removing these restrictions on an Android device, the process is known as "rooting." However, the process is generally much easier, as most Android device manufacturers openly allow advanced users to root their devices. Updated October 20, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/jailbreak Copy

Java

Java Java is a high-level programming language developed by Sun Microsystems. It was originally designed for developing programs for set-top boxes and handheld devices but later became a popular choice for creating web applications. Java's syntax is similar to C++, but Java is strictly an object-oriented programming language. For example, most Java programs contain classes, which are used to define objects, and methods, which are assigned to individual classes. Java is also known for being more strict than C++, meaning variables and functions must be explicitly defined. This means Java source code may produce errors or "exceptions" more easily than other languages, but it also limits other types of errors that may be caused by undefined variables or unassigned types. Unlike Windows executables (.EXE files) or Macintosh applications (.APP files), Java programs are not run directly by the operating system. Instead, Java programs are interpreted by the Java Virtual Machine, or JVM, which runs on multiple platforms. This makes Java programs cross-platform, able to run on Macintosh, Windows, and Unix computers. However, the JVM must be installed for Java applications or applets to run at all. The JVM is included as part of the Java Runtime Environment (JRE). NOTE: Oracle acquired Sun Microsystems in January 2010. Java is now maintained and distributed by Oracle. File extensions: .JAVA, .JAV, .JAD, .JAR, .JSP, .CLASS Updated April 19, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/java Copy
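A minimal sketch of the class-and-method structure described above might look like the following; the class and method names are made up for illustration.

// Greeter.java — compiled to Greeter.class bytecode, which runs on the JVM.
public class Greeter {

    // A method assigned to the class; the parameter and return types must be declared.
    public String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        Greeter greeter = new Greeter();           // create an object from the class
        System.out.println(greeter.greet("Java")); // prints "Hello, Java!"
    }
}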

JavaScript

JavaScript JavaScript is a programming language commonly used in web development. It was originally developed by Netscape as a means to add dynamic and interactive elements to websites. While JavaScript is influenced by Java, its syntax is more similar to C, and it follows the ECMAScript standard maintained by Ecma International. JavaScript is a client-side scripting language, which means the source code is processed by the client's web browser rather than on the web server. This means JavaScript functions can run after a webpage has loaded without communicating with the server. For example, a JavaScript function may check a web form before it is submitted to make sure all the required fields have been filled out. The JavaScript code can produce an error message before any information is actually transmitted to the server. Like server-side scripting languages, such as PHP and ASP, JavaScript code can be inserted anywhere within the HTML of a webpage. However, only the output of server-side code is displayed in the HTML, while JavaScript code remains fully visible in the source of the webpage. It can also be referenced in a separate .JS file, which may also be viewed in a browser. Below is an example of a basic JavaScript function that adds two numbers. The function is called with the parameters 7 and 11. If the code below were included in the HTML of a webpage, it would display the text "18" in an alert box. JavaScript functions can be called within <script> tags or when specific events take place. Examples include onClick, onMouseDown, onMouseUp, onKeyDown, onKeyUp, onFocus, onBlur, onSubmit, and many others. While standard JavaScript is still used for performing basic client-side functions, many web developers now prefer to use JavaScript libraries like jQuery to add more advanced dynamic elements to websites. Updated August 8, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/javascript Copy
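The inline code sample referenced above was lost when this page was converted to plain text. A minimal reconstruction that matches the description (the function name is an assumption) could look like this:

// Adds two numbers and shows the result in an alert box.
function addNumbers(a, b) {
  alert(a + b);
}

// Calling the function with 7 and 11 displays "18".
addNumbers(7, 11);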

JDBC

JDBC Stands for "Java Database Connectivity." JDBC is an API that allows Java applications to connect to and query a wide range of databases. Examples include Java DB, Oracle, MySQL, PostgreSQL, DB2, Sybase ASE, and Microsoft SQL Server. JDBC makes it possible for a software developer to run SQL queries within a Java application. The database connection and any required query translations are handled by the JDBC driver. For example, the same Java method can be used to query a MySQL database and a Microsoft SQL Server database. The goal is to provide developers with "write once, run anywhere" functionality, making it easy to work with different types of databases. In order for an application to use JDBC, the appropriate driver must be installed. Examples include the JDBC Thin driver and the JDBC OCI (Oracle Call Interface) driver. The driver files are available as Java archives (.JAR files), which can be referenced by a Java application. Each Java archive contains .CLASS files that allow Java apps to communicate with different types of databases. Individual classes can be removed to reduce the disk space required by the corresponding Java app. JDBC drivers are maintained and provided by Oracle, which took over the development of Java after acquiring Sun Microsystems in 2010. What is the difference between JDBC and ODBC? JDBC is designed specifically for Java applications, while ODBC is language independent. That means the ODBC API is available for multiple programming languages, while JDBC is only available for Java. A "bridge" can be used to translate commands between the two APIs. For example, an ODBC-JDBC bridge translates ODBC function-calls into JDBC method-calls, allowing them to be processed by a JDBC driver. A JDBC-ODBC driver converts JDBC method calls into ODBC function calls, allowing them to work with an ODBC driver. Updated December 23, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jdbc Copy
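As a rough sketch of the pattern described above, the following Java code queries a database through JDBC; the connection URL, credentials, table, and column names are placeholders, and a MySQL driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    public static void main(String[] args) throws SQLException {
        // The JDBC URL, user, and password are placeholders.
        String url = "jdbc:mysql://localhost:3306/exampledb";

        try (Connection conn = DriverManager.getConnection(url, "exampleuser", "examplepass");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT title FROM books WHERE author = ?")) {

            stmt.setString(1, "Arthur Conan Doyle"); // bind the query parameter

            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("title"));
                }
            }
        }
    }
}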

JFS

JFS Stands for "Journaled File System." JFS is a 64-bit file system created by IBM. The initial version of JFS (also called JFS1) was developed for IBM's AIX operating system and was released in 1990. In 2001, IBM released JFS2 (the Enhanced Journaled File System), as well as a version of JFS that is compatible with the Linux operating system. The "journaled" aspect of JFS means that the file system keeps track of changes to files and folders in a log file (or journal). This log can be used to backtrack certain changes in case of an unexpected power failure or system crash, which may prevent data corruption. For example, if a file is in the process of being moved or deleted when a computer crashes, the journal can be used to restore the file to its last stable state. Without the journal, the file may be truncated, which would make it unreadable and could produce other file system errors. The Enhanced Journaled File System (JFS2) is similar to JFS, but supports much larger volumes and file sizes. For example, a hard disk formatted using JFS can be a maximum of one terabyte (TB) in size, while a JFS2-formatted disk can be up to 32 TB. JFS's maximum file size is slightly less than 64 gigabytes (GB), while JFS2 supports file sizes up to 16 TB. This large file capacity is especially important for storing large databases, which are often contained in a single file. NOTE: JFS2 is supported by AIX 5.1 and later. It is also supported on Linux systems that have the "jfsutils" package installed. Updated January 3, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jfs Copy

Jitter

Jitter Jitter, in networking, refers to small intermittent delays during data transfers. It can be caused by a number of factors including network congestion, collisions, and signal interference. Technically, jitter is the variation in latency — the delay between when a signal is transmitted and when it is received. All networks experience some amount of latency, especially wide area networks that span across the Internet. This delay, typically measured in milliseconds, can be problematic for real-time applications, such as online gaming, streaming, and digital voice communication. Jitter makes this worse by producing additional delays. Network jitter causes packets to be sent at irregular intervals. For example, there may be a delay after some packets are sent and then several packets may be sent all at once. This may cause packet loss if the receiving system is unable to process all the incoming packets. If this happens during a file download, the lost packets will be resent, slowing down the file transfer. In the case of a real-time service, like audio streaming, the data may simply be lost, causing the audio signal to drop out or decrease in quality. The standard way to compensate for network jitter is to use a buffer that stores data before it is used, such as a few seconds of an audio or video clip. This will smooth out the playback of the media since it gives the receiving computer a few seconds to receive any packets lost due to jitter. While buffers are an effective solution, they must be extremely small when used in real-time applications such as online gaming and video conferencing. If the buffer is too large (greater than 10 ms), it will cause a noticeable delay. Updated February 7, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jitter Copy

Joystick

Joystick A joystick is an input device commonly used to control video games. Joysticks consist of a base and a stick that can be moved in any direction. The stick can be moved slowly or quickly and in different amounts. Some joysticks have sticks that can also be rotated to the left or right. Because of the flexible movements a joystick allows, it can provide much greater control than the keys on a keyboard. Joysticks typically include several buttons as well. Most joysticks have at least one button on the top of the stick and another button on the front of the stick that serves as a trigger. Many joysticks also include other buttons on the base that can be pressed using the hand that is not guiding the stick. Joysticks typically connect to your computer using a basic USB or serial port connection and often come with software that allows you to assign the function of each button. Since joysticks emulate the controls of planes and other aircraft, they are best suited for flight simulators and flying action games. However, some gamers like to use joysticks for other types of video games, such as first-person shooters and fighting games. Others prefer using the basic keyboard and mouse, which they are already accustomed to. Updated November 2, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/joystick Copy

JPEG

JPEG Stands for "Joint Photographic Experts Group." JPEG is a popular image file format. It is commonly used by digital cameras to store photos since it supports 2^24, or 16,777,216, colors. The format also supports varying levels of compression, which makes it ideal for web graphics. The 16 million possible colors in a JPEG image are produced by using 8 bits for each color (red, green, and blue) in the RGB color space. This provides 2^8, or 256, values for each of the three colors, which combined allow for 256 x 256 x 256, or 16,777,216, colors. Three values of 0 produce pure black, while three values of 255 create pure white. The JPEG compression algorithm may reduce the file size of a bitmap (BMP) image by ten times with almost no degradation in quality. Still, the compression algorithm is lossy, meaning some image quality is lost during the compression process. For this reason, professional digital photographers often choose to capture images in a raw format so they can edit their photos in the highest quality possible. They typically export the pictures as JPEG (.JPG) images when they are shared or published on the web. Besides image data, JPEG files may also include metadata that describes the contents of the file. This includes the image dimensions, color space, and color profile information, as well as EXIF data. The EXIF data is often "stamped" on the image by a digital camera and may include the aperture setting, shutter speed, focal length, flash on/off, ISO number, and dozens of other values. Disadvantages of the JPEG Format While the JPEG format is great for storing digital photos, it does have some drawbacks. For example, the lossy compression can cause an issue called "artifacting," in which parts of the image become noticeably blocky. This typically happens when a high compression setting is used to save the image. For saving small images and images with a lot of text, the GIF format is often a better alternative. JPEG images also don't support transparency. Therefore, the JPEG format is a poor choice for saving non-rectangular images, especially if they will be published on webpages with different background colors. The PNG format, which supports transparent pixels, is a better choice for these types of images. NOTE: The Joint Photographic Experts Group, which the JPEG image format is named after, published the first JPEG specification in 1992. Since then, the organization has developed several variations of the format, including JPEG 2000 and JPEG XR. However, the standard JPEG format remains the most popular. File extensions: .JPG, .JPEG, .JFIF, .JPX, .JP2 Updated July 26, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jpeg Copy

jQuery

jQuery jQuery is a JavaScript library that allows web developers to add extra functionality to their websites. It is open source and provided for free under the MIT license. In recent years, jQuery has become the most popular JavaScript library used in web development. To implement jQuery, a web developer simply needs to reference the jQuery JavaScript file within the HTML of a webpage. Some websites host their own local copy of jQuery, while others simply reference the library hosted by Google or the jQuery server. For example, a webpage may load the jQuery library with a script tag placed within the <head> section of the HTML. Once the jQuery library is loaded, a webpage can call any jQuery function supported by the library. Common examples include modifying text, processing form data, moving elements on a page, and performing animations. jQuery can also work with Ajax code and scripting languages, such as PHP and ASP, to access data from a database. Since jQuery runs on the client side (rather than the web server), it can update information on a webpage in realtime, without reloading the page. A common example is "autocomplete," in which a search form automatically displays common searches as you type your query. In fact, this is how TechTerms.com provides search suggestions when you type in the search box. Besides its free license, the other main reason jQuery has gained such popularity is its cross-browser compatibility. Since each browser renders HTML, CSS, and JavaScript differently, it can be difficult for a web developer to make a website appear the same across all browsers. Instead of having to write custom functions for each browser, a web developer can use a single jQuery function that will work in Chrome, Safari, Firefox, and Internet Explorer. This multi-browser support has led many developers to switch from standard JavaScript to jQuery, since it greatly simplifies the coding process. You can read more about jQuery and download the latest library at the official jQuery website. Updated March 1, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jquery Copy

JRE

JRE Stands for "Java Runtime Environment." JRE is a software layer that allows a computer's operating system to run Java applications and applets. It includes several components that Java apps require before they can function, including a virtual machine and code resource libraries. A Java Runtime Environment is available for most computer platforms, including Windows, macOS, and Unix. Java is a popular programming language for creating cross-platform applications, but most operating systems do not include native Java support. Instead, users must install the Java Runtime Environment before Java apps can run on their system. The JRE packages together several necessary components. First, it includes a standard set of Java class libraries that contain common code snippets that Java applications can use. It also includes a Java Virtual Machine (JVM), which executes the bytecode within compiled Java applications while working with the operating system to provide Java apps access to memory and other system resources. The JRE is available for download from Oracle, the current owner of the Java platform, for Windows, macOS, and Unix. It was formerly available as a web browser plug-in that allowed users to run Java applets in the browser, but the plug-in was discontinued in 2016. The JRE is also available as part of the Java Development Kit (JDK) along with a debugger, compiler, and other tools that help developers create Java apps. Updated July 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/jre Copy

JSF

JSF Stands for "JavaServer Faces." JSF is a framework that allows Web developers to build user interfaces for JavaServer applications. It is supported by Web servers running Java Enterprise Edition (Java EE). JSF simplifies the creation of Web applications by providing a standard set of tools (or an API) for building user interfaces. For example, instead of coding a Web form in HTML, a developer can instead call a simple JSF function that generates the form. Another JSF function may be used to process the data entered by the user. These functions are processed on the server and the resulting data is output to the client's browser. JSF benefits developers by providing reusable objects that can easily be inserted into webpages. However, these components are also beneficial to website visitors since they produce standardized interface elements. Since the Java code is processed on the server, the appearance of the generated Web objects is consistent across multiple websites. Additionally, JSF components are tested on multiple platforms, so they work well in all major browsers. While JSF is often used to create basic webpage elements, it also supports advanced features, such as database access, Ajax interaction, and JavaScript page actions. These capabilities are useful for building dynamic websites that generate pages on-the-fly. Updated January 18, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jsf Copy

JSON

JSON Stands for "JavaScript Object Notation." JSON is a standard text-based data interchange format. It formats data in a way that is both human- and machine-readable. Web applications and web servers use JSON files to transfer data between applications and servers. While JSON is based on JavaScript, it is a language-independent format that is supported by many programming languages. JSON is similar to XML (another plain text data interchange format) but is more compact because it does not require separate tags for each element. Instead, JSON contains objects within curly brackets ({}). Each attribute has a name and value, separated by colons (:). A value can consist of a string within quotation marks, a number, a boolean (true or false), another object, or an array contained within square brackets ([]). Attribute name/value pairs are separated by commas (,). For example, below is the same book defined in both JSON and XML. JSON { "book": { "title": "A Study in Scarlet", "author": "Arthur Conan Doyle", "publisher": { "name": "Ward Lock & Co", "year": "1887", "country": "United Kingdom" } } } XML <book> <title>A Study in Scarlet</title> <author>Arthur Conan Doyle</author> <publisher> <name>Ward Lock & Co</name> <year>1887</year> <country>United Kingdom</country> </publisher> </book> Even though the JSON and XML objects use roughly the same number of lines to define the book's information, the JSON code uses a lot less text and is easier to read. It also requires less data to store and transfer. These data savings might be negligible for a single data file, but JSON's low overhead can add up to reduce bandwidth and other system resources for web applications that use a lot of data files. File extension: .JSON Updated April 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/json Copy
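Most languages provide JSON parsers as built-in or third-party libraries. As one hedged example, the following Java sketch reads the book object above using the widely used Jackson library (assumed to be on the classpath).

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonSketch {
    public static void main(String[] args) throws Exception {
        String json = "{ \"book\": { \"title\": \"A Study in Scarlet\", "
                    + "\"author\": \"Arthur Conan Doyle\", "
                    + "\"publisher\": { \"name\": \"Ward Lock & Co\", \"year\": \"1887\" } } }";

        // Parse the text into a tree of nodes, then read individual values.
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(json);

        System.out.println(root.get("book").get("title").asText());                 // A Study in Scarlet
        System.out.println(root.get("book").get("publisher").get("year").asText()); // 1887
    }
}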

JSP

JSP Stands for "Jakarta Server Pages" (formerly Java Server Pages). JSP is a technology developed by Sun Microsystems for creating dynamic webpages. JSP allows developers to embed Java code within an HTML page, creating a mix of static content and dynamic behavior. JSP is similar to other server-side programming and scripting technologies like ASP and PHP. Web developers typically use JSP for web applications that require server-side rendering of dynamic content. JSP pages can contain a mix of static HTML content and Java code, which can accept input from a user or pull information from a database. When the user submits a form or other triggering action, the Java code is executed by the web server and compiled into a small application known as a servlet. The server processes input from the user or database, generates a new HTML page using the generated output, and sends it to the user. [Image: JSP executes Java code entirely on the web server, returning only an HTML page with dynamic content.] JSP was widely used for web application development in the past, but other web technologies are taking its place and gaining popularity. Newer Java template engines like Thymeleaf are more flexible and scalable than legacy JSP technology for executing server-side applications. JavaScript frameworks like React, which run the application on the client side instead of the server side, are more capable and responsive than they used to be; they also tend to be easier for developers to use for many kinds of web apps. JSP technology is still used to maintain legacy systems, but is uncommon in new app development. Updated June 29, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/jsp Copy
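A minimal JSP page might look like the sketch below; the file name and output text are assumptions. The Java expression between <%= and %> runs on the server, and only its result is sent to the browser as part of the HTML.

<%-- hello.jsp: a minimal sketch --%>
<html>
  <body>
    <p>The current server time is <%= new java.util.Date() %>.</p>
  </body>
</html>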

Jumper

Jumper A jumper is a small metal connector that acts as an on/off switch. Multiple jumpers are often used together to configure settings for a hardware device. Some jumpers are encased in a plastic switch that can be toggled on or off. Other jumpers are plastic sleeves with metal linings that connect two metallic prongs together. When the sleeve is applied, the connection is on; when it is removed, the connection is off. Jumpers are found in computer hardware as well as other types of electronic devices. A motherboard, for instance, may include jumpers that are used to enable different types of components. Hard drives commonly include jumpers that can enable or disable different features. For example, one jumper setting might enable spread spectrum clocking (SSC) while another setting might enable PHY or "physical layer" mode. These settings may be required for the hard drive to work with specific types of hardware. Jumpers are also found in common household electronics, such as remote controls. For example, a garage door remote may include a row of jumpers. These jumpers must match the settings of the garage door receiver in order for it to work. Ceiling fan remotes often include jumpers that can be customized to match the settings for a specific fan. Modifying the jumper settings of a remote typically changes the frequency on which it communicates. This allows you to use different frequencies for different devices. While altering jumpers might seem like a high-tech process, it is usually very easy. Remotes, for example, typically have jumpers right next to the battery so they can easily be switched on or off. Just make sure that you know how to correctly modify the jumper settings before changing them. If you modify the wrong jumpers, the device might not work. Updated September 5, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jumper Copy

JVM

JVM Stands for "Java Virtual Machine." A JVM is a software-based machine that runs Java programs. It can be installed on several different operating systems, including Windows, OS X, and Linux. JVMs allow Java apps to run on almost any computer. A Java virtual machine processes instructions similar to a physical processor. However, the Java source code in a .JAVA file must first be compiled into instructions the JVM can understand. This binary format, called "bytecode," is stored in .CLASS files; the JVM can interpret the bytecode one instruction at a time or compile it further into native code at runtime to improve performance. While Java apps are platform independent (meaning they can run on different platforms), not all Java programs are compatible with all Java virtual machines. JVMs are updated regularly with new features and support for new instructions. Therefore, Java programs often require a minimum JVM version in order to run. NOTE: The terms JVM and JRE (Java runtime environment) are often used synonymously. Technically, however, the JVM is part of a JRE, which also includes libraries of functions and other files that Java programs can reference. Updated March 23, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/jvm Copy

JWT

JWT Stands for "JSON Web Token." A JWT is an industry-standard token that shares security information between two parties. JWTs are compact and URL-safe, which allows web servers and clients to exchange them as part of an HTTPS request. The most common use for a JWT is for authenticating a user signed in to a website or web app, often through a single sign-on (SSO) service. JWTs consist of three parts: The header includes information about the type of token and the algorithm used to generate the signature. The payload includes the list of claims made by the token. These claims typically include information about the token issuer, the website or web app using the token, and the user. They also include the time the token was issued and when it will expire (both using epoch time). In addition to these common values, a JWT may include arbitrary claims unique to the website or web app issuing the token. The signature verifies that the token hasn't been tampered with since being issued. Websites and web apps often use JWTs as a stateless method of authenticating a user. They do not require that the server maintain a session in its memory or database, which reduces the overhead required for each user. Instead, the client and server trade the JWT back and forth with each HTTPS request, verifying the signature to ensure the token hasn't been altered. Once the JWT expires, the user must sign back in for a new one to be issued. Creating a JWT Both the header and the payload of a JWT consist of text written in JSON. The header and payload are each encoded separately using an algorithm called base64, turning several lines of text into a single string. The encoded header and payload strings are combined, separated by a '.' character. [Image: The header and payload are encoded and combined into a string, which is hashed to produce the signature.] The combined string, containing the encoded header and payload, is then hashed using a signing algorithm and a secret key to create the signature. The signature is then appended to the end of the header and payload, separated by another '.' character, to create the final JWT. Updated May 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/jwt Copy
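The encoding and signing steps described above can be sketched with Java's standard Base64 and HMAC classes. The header, claims, and secret key below are placeholders; production systems would normally rely on a vetted JWT library.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    public static void main(String[] args) throws Exception {
        String headerJson  = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payloadJson = "{\"sub\":\"user123\",\"exp\":1700000000}"; // placeholder claims
        byte[] secret = "example-secret-key".getBytes(StandardCharsets.UTF_8);

        // Base64url-encode the header and payload, then join them with a '.' character.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));

        // Sign the combined string with HMAC-SHA256 and append the result as the third part.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        System.out.println(signingInput + "." + signature);
    }
}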

Kbps

Kbps Stands for "Kilobits Per Second." 1 Kbps is equal to 1,000 bits per second. That means a 300 Kbps connection can transfer 300,000 bits in one second. 1,000 Kbps is equal to 1 Mbps. Kbps is primarily used to measure data transfer rates. For example, dial-up modems were rated by their maximum download speeds, such as 14.4, 28.8, and 56 Kbps. Throughout the 1990s and early 2000s, Kbps remained the standard way to measure data transfer rates. However, broadband connections such as cable and DSL now offer speeds of several megabits per second. Therefore, Mbps is now more commonly used than Kbps. NOTE: The lowercase "b" in Kbps is significant. It stands for "bits," not bytes (which is represented by a capital "B"). Since there are eight bits in one byte, 400 Kbps is equal to 400 ÷ 8, or 50 KBps. Because data transfer speeds have traditionally been measured in bps, Kbps is more commonly used than KBps. Updated October 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kbps Copy

KDE

KDE Stands for "K Desktop Environment." KDE is a contemporary desktop environment for Unix systems. It is a Free Software project developed by hundreds of software programmers across the world. Both the KDE source code and the software itself are made freely available to the public. KDE's primary benefit is the modern graphical user interface (GUI) it provides for Unix workstations. While Unix systems are notoriously difficult for novice users to operate, KDE makes it possible for the average user to work on a Unix system. In addition to the modern interface, KDE also includes several user-friendly features, such as an application help system and standardized menus and toolbars. It also supports the ability to customize the interface with various skins or themes. Another important aspect of the K Desktop Environment is its application development framework, which is what software engineers use to develop programs for KDE. Since a desktop environment is only as useful as the applications available for it, it is important that developing software for the environment is not a tedious process. Therefore, the KDE application development framework has been designed to help programmers develop robust applications in a simple and efficient manner. This has led to the development of KOffice and hundreds of high-quality programs for KDE. Updated December 29, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kde Copy

Keep-Alive

Keep-Alive Keep-Alive is an HTTP header that allows a web server to use a single connection for multiple requests from a web browser. Servers running HTTP/1 often have keep-alive turned on to improve website performance. The keep-alive header is not used in HTTP/2 since it is the default behavior of the HTTP/2 protocol. When Keep-Alive is enabled on a web server, it creates persistent connections between the server and clients (website visitors). The TCP connection stays open until it is closed or times out. Since each TCP connection must complete a handshake process, multiple connections increase the page load time. Keep-Alive provides a way for browsers to download all webpage assets, such as images and CSS files, over a single connection. The disadvantage of Keep-Alive is that it requires more system resources from the web server. If websites on a server receive a lot of traffic, it may have several — possibly several thousand — persistent connections open at once. Eventually, the server may not be able to handle new connections and become unresponsive. Apache provides the following directives to prevent servers from reaching maximum capacity: KeepAliveTimeout - the maximum time a persistent connection can stay open while waiting for new requests MaxKeepAliveRequests - the maximum number of requests allowed within a single connection, before it needs to be reset To enable keep-alive on an Apache server, add the following code to a server-wide include file or the .HTACCESS file of a specific website: Header set Connection keep-alive Updated May 27, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/keep-alive Copy
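In an Apache configuration file, the directives mentioned above are typically set together, for example (the values shown are illustrative, not recommendations):

# Enable persistent connections and cap how long, and how much, each one can be reused.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5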

Kernel

Kernel A kernel is the foundational layer of an operating system (OS). It functions at a basic level, communicating with hardware and managing resources, such as RAM and the CPU. Since a kernel handles many fundamental processes, it must be loaded at the beginning of the boot sequence when a computer starts up. The kernel performs a system check and recognizes components, such as the processor, GPU, and memory. It also checks for any connected peripherals. As the OS loads and the graphical user interface appears, the kernel keeps running. Even after the OS has fully loaded, the kernel continues to run in the background, managing system resources. Types of Kernels Several types of kernels exist, but two popular ones include monolithic kernels and microkernels. A monolithic kernel is a single codebase, or block of source code, that provides all the necessary services offered by the operating system. It is a simple design that creates a well-defined communication layer between the hardware and software. Microkernels have the same function as monolithic kernels, but they are designed to be as small as possible. Instead of managing all the resources from a single codebase, the kernel handles only the most basic functions. It uses modules or "servers" to manage everything else. For example, device drivers are typically included in a monolithic kernel, but they would be split into separate modules in a microkernel. This design is more complex, but it can provide a more efficient use of system resources and helps protect against system crashes. Kernel Panics Since the kernel handles the most basic functions of a computer, if it crashes, it can take down the entire computer. This undesirable event is called a "kernel panic" on macOS and Unix systems. It is similar to the blue screen of death in Windows. The only way to recover from a kernel panic is to restart your computer. NOTE: Kernel panics are often caused by hardware communication issues. Therefore, if your computer is producing repeated kernel panics, try unplugging unnecessary devices to see if that fixes the problem. Updated January 13, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kernel Copy

Kerning

Kerning Kerning is a term in typography that refers to the spacing between two individual characters. It reduces the optical space between two letters by allowing parts of adjacent letters to extend into each other's horizontal space. Kerning enhances readability by making text appear more natural. Without kerning, each letter would take up a solid block of space as if it were within an invisible rectangle. However, some letter shapes have angled or horizontal strokes that leave extra space in their top or bottom corners. For example, if an A and a V are placed next to each other without kerning, they will appear as if a space is between them. Kerning allows them to overlap slightly and fit together, taking up less space and appearing more natural. Digital fonts include a kerning table that defines the proper spacing between every possible letter combination, allowing software to automatically apply kerning where necessary. Word processors and desktop publishing software allow you to override those values and manually adjust kerning. Good kerning also changes with font size — smaller font sizes need more space between letters to remain legible, letting you kern text more tightly at larger sizes. NOTE: Kerning should not be confused with tracking. Kerning adjusts the spacing between individual pairs of letters, while tracking applies to an entire block of text. Updated February 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/kerning Copy

Keyboard

Keyboard A keyboard is the most common computer input device and is the primary way a computer's user enters text. Most computers use both a keyboard and a mouse or trackpad for user input, with the keyboard responsible for text input and the mouse or trackpad responsible for moving the cursor and interacting with objects on the screen. A keyboard may connect to a computer over USB or Bluetooth; laptop computers include built-in keyboards below the screen. Each key on a keyboard includes at least one symbol representing the character that appears when you press that key. Every keyboard includes keys for letters, numbers, and common punctuation symbols, but exact keyboard layouts vary by region and language. Keyboards also provide keys that let you navigate around a text document, including a set of arrow keys. Even though some letters and symbols move from country to country, most keyboards use the QWERTY design that gets its name from the first six letters in the upper-left corner. A basic keyboard with a 10-key keypad on the right, and special media keys above the Function key row In addition to letters, numbers, and symbols, keyboards also include several modifier keys. When pressed with a letter, number, or symbol key, a modifier key modifies the signal sent to the computer. For example, pressing and holding the Shift key while typing changes typed letters from lowercase to uppercase and inserts other keys' alternate symbols. Using other modifier keys (like Command or Control) with certain keys instead triggers specific keyboard shortcuts. For example, pressing Command + S on macOS or Control + S on Windows triggers the universal keyboard shortcut for saving the active document. Keyboards also include a row of Function keys above the numbers row, which perform specific system tasks when pressed. Many keyboards include special media playback keys that let you adjust system volume, play and pause, or skip tracks. Some keyboards include a 10-key keypad on the right side, but many keyboards omit this section to take up less space. A few keyboards even include special customizable keys that let you record your own keyboard shortcuts or run short macro scripts. NOTE: Touchscreen devices like smartphones and tablets do not require hardware keyboards since they can use virtual software keyboards that appear on the screen when necessary. However, you can pair a Bluetooth wireless keyboard to a phone or tablet if you want to type with a physical keyboard. Updated July 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/keyboard Copy

Keyboard Shortcut

Keyboard Shortcut A keyboard shortcut is a combination of keyboard keys that, when pressed simultaneously, performs a specific command. A shortcut typically combines one or more modifier keys with another key. Many keyboard shortcuts are universal across all apps on a platform, like Control + S to save a file on Windows or Command + S to save on macOS. Using a keyboard shortcut to perform a command can save you significant time since you don't need to navigate through an app's menus and dialog boxes. Most common keyboard shortcuts combine one modifier key and one other key, typically a letter, number, or punctuation symbol. In some cases, adding a second or third modifier key to a keyboard shortcut changes it to perform a similar function. For example, in macOS, pressing Command + S saves a file, while adding the Shift key to that shortcut opens the Save As dialog box instead. In addition to the universal set of shortcuts, most applications also include a unique set of keyboard shortcuts. You can find many of an app's keyboard shortcuts by opening menus in the app's menu bar. If a command has a keyboard shortcut assigned to it, that shortcut will appear next to the command's name. Several keyboard shortcuts for commands in the TextEdit Font menu In many cases, the same keyboard shortcut triggered in different apps will perform different commands, so it's wise to consult menus or other documentation before using a new shortcut. For example, on macOS, pressing Command + D in Safari creates a bookmark of the current page, while that same shortcut in Microsoft Word instead opens the Font properties dialog box. To view a list of the most common keyboard shortcuts in macOS and Windows, view the pages below: macOS Keyboard Shortcuts | Windows Keyboard Shortcuts Updated October 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/keyboard_shortcut Copy
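Web applications often register their own shortcuts by listening for key events. The TypeScript sketch below (standard browser DOM API) intercepts the save shortcut; the save() function is a hypothetical placeholder, not a built-in.

// Intercept Ctrl+S (Windows) or Command+S (macOS) in a web app.
function save(): void {
  console.log("saving the document"); // placeholder for real save logic
}

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const modifier = event.ctrlKey || event.metaKey; // Control or Command
  if (modifier && event.key.toLowerCase() === "s") {
    event.preventDefault(); // stop the browser's own save dialog
    save();
  }
});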

Keylogger

Keylogger A keylogger is a program that records the keystrokes on a computer. It does this by monitoring a user's input and keeping a log of all keys that are pressed. The log may be saved to a file or even sent to another machine over a network or the Internet. Keylogger programs are often deemed spyware because they usually run without the user knowing it. They can be maliciously installed by hackers to spy on what a user is typing. By examining the keylog data, it may be possible to find private information such as a username and password combination. Therefore, keyloggers can be a significant security risk if they are unknowingly installed on a computer. The best way to protect yourself from keylogger programs is to install anti-virus or security software that warns you when any new programs are being installed. You should also make sure no unauthorized people have access to your computer. This is especially true in work environments. You can also periodically check the current processes running on your computer to make sure no keyloggers or other malware programs are active. While it is unlikely that you have a keylogger program installed on your computer, it is worth checking. Updated February 22, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/keylogger Copy

Keystroke

Keystroke A keystroke is a single press of a key on a computer's keyboard. Keystrokes per minute (KSPM) is a common measurement of typing speed in addition to words per minute (WPM). Some applications and websites also use keystrokes as event triggers. Each keystroke on a keyboard sends a signal to the computer. This signal tells the computer which key the user pressed, how long they held it down, and whether it was pressed alone or with another key. Most keys on a keyboard will type a letter or symbol when pressed. Other keys, called modifier keys, do nothing when pressed on their own but trigger a keyboard shortcut when combined with another key. A computer's operating system receives the keystrokes, then either sends a command directly or passes the keystrokes to the application that currently has focus. Certain types of software, called keyloggers, keep records of every keystroke a user makes on their computer. Some keyloggers have legitimate purposes, like assisting in software troubleshooting or product development feedback, but viruses and malware also often include keyloggers. These keyloggers monitor a computer user's activity secretly to steal passwords and other personal information, then report every keystroke to a remote computer. Updated October 19, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/keystroke Copy
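In a browser, each keystroke reaches scripts as a pair of keydown and keyup events, which carry the key identity and modifier state. A hedged TypeScript sketch (standard browser DOM API; the timing logic is illustrative only, not a keylogging tool):

// Log which key was pressed, whether Shift was held, and roughly how long it stayed down.
const downTimes = new Map<string, number>();

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (!downTimes.has(event.code)) {
    downTimes.set(event.code, performance.now());
  }
});

document.addEventListener("keyup", (event: KeyboardEvent) => {
  const start = downTimes.get(event.code);
  downTimes.delete(event.code);
  const heldMs = start === undefined ? 0 : performance.now() - start;
  console.log(`${event.key} held ~${heldMs.toFixed(0)} ms, shift=${event.shiftKey}`);
});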

Keywords

Keywords Keywords are words or phrases that describe content. They can be used as metadata to describe images, text documents, database records, and Web pages. A user may "tag" pictures or text files with keywords that are relevant to their content. Later on, these files may be searched using keywords, which can make finding files much easier. For example, a photographer may use a program like Extensis Portfolio or Apple iPhoto to tag his nature photos with words such as "nature," "trees," "flowers," "landscape," etc. By tagging the photos, he can later locate all the pictures of flowers by simply searching for the "flowers" keyword. Keywords are used on the Web in two different ways: 1) as search terms for search engines, and 2) as words that identify the content of the website. 1) Search Engine Search Terms Whenever you search for something using a search engine, you type keywords that tell the search engine what to search for. For example, if you are searching for used cars, you may enter "used cars" as your keywords. The search engine will then return Web pages with content relevant to your search terms. The more specific keywords you use, the more specific (and useful) the results will be. Therefore, if you are searching for a specific used car, you may enter something like "black Honda Accord used car" to get more accurate results. Many search engines also support boolean operators that can be used along with keywords to further refine the search. For example, you may search for "Apple AND computers NOT fruit" if you only want results related to Apple products and not the kind of apples that grow on trees. 2) Web Page Description Terms Keywords can also describe the content of a Web page using the keyword meta tag. This tag is placed in the head section of a page's HTML and contains words that describe the content of the Web page. The purpose of the keywords meta tag is to help search engines identify and organize Web pages, like in the photos example above. However, because webmasters have been known to use inaccurate tags to get higher search engine ranking, many search engines now give little to no weight to the keywords meta tag when indexing pages. Updated October 26, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/keywords Copy

Kibibyte

Kibibyte A kibibyte (KiB) is a unit of data storage equal to 2^10 bytes, or 1,024 bytes. Like mebibyte (MiB) and gibibyte (GiB), kibibyte has a binary prefix to remove any ambiguity when compared to the multiple possible definitions of a kilobyte. A kibibyte is similar in size to a kilobyte, which is 10^3 bytes (1,000 bytes). 1,024 kibibytes make up a mebibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, a kilobyte could mean two different numbers depending on what was measured. Computer memory and formatted storage space used binary measurements, where one kilobyte was 1,024 bytes; disk capacity (as advertised by the manufacturer) used decimal measurements, where one kilobyte was 1,000 bytes. Since these two measurements differed by a mere 24 bytes, the discrepancy was easy for computer users of the day to overlook. Over time, storage capacities grew, and the difference between two measurements with the same name grew exponentially. Using specific binary prefixes like kibibyte and mebibyte to refer to those values helps to address the confusion. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 4, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/kibibyte Copy
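A quick TypeScript sketch of the two definitions, showing how the gap widens as the prefixes grow (the constant names are illustrative):

// Binary (kibi-) vs. decimal (kilo-) units.
const KIB = 2 ** 10;  // 1,024 bytes
const KB = 10 ** 3;   // 1,000 bytes
console.log(KIB - KB); // 24-byte difference at the kilo scale

const MIB = 2 ** 20;  // 1,048,576 bytes
const MB = 10 ** 6;   // 1,000,000 bytes
console.log(MIB - MB); // 48,576-byte difference at the mega scale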

Kilobit

Kilobit A kilobit is 10^3 or 1,000 bits. One kilobit (abbreviated "Kb") contains one thousand bits and there are 1,000 kilobits in a megabit. Kilobits (Kb) are smaller than kilobytes (KB), since there are 8 kilobits in a single kilobyte. Therefore, a kilobit is one-eighth the size of a kilobyte. Before broadband Internet was common, Internet connection speeds were often measured in kilobits. For example, a 28.8K modem was able to receive data up to 28.8 kilobits per second (Kbps). A 56K modem could receive data up to 56 Kbps. Most ISPs now offer connection speeds of 10 Mbps and higher, so kilobits are not as commonly used. However, if you download a file from a server that does not have a lot of bandwidth, your system might receive less than 1 Mbps. In this case, the download speed may be displayed in kilobits per second. NOTE: While kilobits are used to measure data transfer rates, kilobytes are used to measure file size. Therefore, if you download an 800 KB file at 400 Kbps, it will take 16 seconds (rather than 2). This is because 400 kilobits per second is equal to 50 kilobytes per second. Updated October 3, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kilobit Copy
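The arithmetic in the note above, as a small TypeScript sketch (the function name is illustrative):

// How long an 800 KB file takes to download at 400 Kbps.
function downloadSeconds(fileKB: number, speedKbps: number): number {
  const speedKBps = speedKbps / 8; // kilobits/s -> kilobytes/s
  return fileKB / speedKBps;
}
console.log(downloadSeconds(800, 400)); // 16 seconds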

Kilobyte

Kilobyte A kilobyte is 10^3 or 1,000 bytes. The kilobyte (abbreviated "K" or "KB") is the smallest unit of measurement greater than a byte. It precedes the megabyte, which contains 1,000,000 bytes. While one kilobyte is technically 1,000 bytes, kilobytes are often used synonymously with kibibytes, which contain 1,024 bytes. Kilobytes are most often used to measure the size of small files. For example, a plain text document may contain 10 KB of data and therefore would have a file size of 10 kilobytes. Small website graphics are often between 5 KB and 100 KB in size. While some files contain less than 4 KB of data, modern file systems such as NTFS (Windows) and HFS+ (Mac) have a cluster size of 4 KB. Therefore, individual files typically take up a minimum of four kilobytes of disk space. NOTE: For a list of other units of measurement, view this Help Center article. Updated October 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kilobyte Copy

Kilohertz

Kilohertz One kilohertz (abbreviated "kHz") is equal to 1,000 hertz. Like hertz, kilohertz is used to measure frequency, or cycles per second. Since one hertz is one cycle per second, one kilohertz is equal to 1,000 cycles per second. Kilohertz is commonly used to measure the frequencies of sound waves, since the audible spectrum of sound frequencies is between 20 Hz and 20 kHz. For example, middle C (C4) on a piano keyboard produces a frequency of 261.63 Hz. The C key two octaves above middle C (C6) produces a frequency of just over 1 kHz (1,046.5 Hz). Since the sound frequency doubles with each octave, the C7 key produces an audible frequency of just over 2 kHz (2,093 Hz). As you might guess, frequencies above 2 kHz sound very high-pitched. Sound waves and low frequency radio waves are often measured in kilohertz. Other waves, such as high frequency radio waves, visible light waves, and ultraviolet rays, have much higher frequencies. Therefore, most waves in the electromagnetic spectrum are measured in megahertz, gigahertz, or even greater units of measurement. Abbreviation: kHz Updated March 3, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kilohertz Copy

Kindle

Kindle The Kindle is a portable e-reader developed by Amazon.com. It allows you to download and read digital books, newspapers, magazines, and other electronic publications. The Kindle also includes a built-in speaker and headphone jack for listening to audiobooks or background music. The first Kindle was released in November 2007 and several updated versions have been released since then. Each Kindle, except for the Kindle Fire, uses a special type of display called "E Ink" or electronic paper. Unlike a typical laptop screen or computer monitor, the E Ink display is monochrome and has no backlight. Instead, it has a light-colored background, and text and images are displayed in grayscale. The result is a paper-like display that can be easily viewed in bright sunlight. You can download content to a Kindle using the built-in Wi-Fi connection (available in Kindles released in 2010 or later) or Amazon's proprietary 3G Whispernet network. This Whispernet network is a free service provided by Amazon.com to Kindle users and does not require a wireless subscription. Amazon.com also provides a "Whispersync" service that allows users to wirelessly sync data between multiple Kindle devices. While the Kindle was originally designed as a basic e-reader, each iteration has provided more functionality. For example, recent versions of the Kindle include a web browser, which allows you to view websites. The Kindle Fire, which was introduced in September 2011, is as much a tablet as an e-reader, since it has a color touchscreen and runs the Android operating system. Kindle Fire users can also download apps directly from the Amazon.com Appstore. NOTE: Kindle eBook files are saved with an .AZW file extension. Updated September 30, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kindle Copy

KOffice

KOffice KOffice (pronounced "K-office") was an integrated office suite for the K Desktop Environment (KDE), a desktop interface for Unix systems. It was first released in 2000 and the final version was published in 2012. The suite contained productivity applications, including a word processor, image editor, and project planning tool. Like KDE, both the KOffice source code and the office suite were available as free downloads. The applications included with KOffice are listed below. Productivity KWord - a frame-based word processor KSpread - a spreadsheet program KPresenter - a presentation creation program Kexi - a database program Creativity Krita - a raster graphic editing program that supports layers Karbon14 - a vector graphics drawing application Kivio - a flowcharting program Management KPlato - a project management and planning program Supporting Programs KFormula - a mathematical formula editor KChart - a graphing and charting program Kugar - a business report generating application NOTE: KOffice was replaced by Calligra Suite, which was first released in 2012. Updated August 29, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/koffice Copy

KVM Switch

KVM Switch Stands for "Keyboard, Video, and Mouse switch." As the name implies, a KVM switch allows you to use multiple computers with the same keyboard, video display, and mouse. Now, most of us don't need to use two computers at once. In fact, using one computer at a time can sometimes be a challenge. However, there are situations where using a single keyboard, mouse, and display with multiple machines can be very practical. For example, software programmers may use a KVM switch to alternate between two or more computers with different operating systems. This allows them to test their software on multiple platforms when developing a cross-platform application. Network administrators often use KVM switches to monitor and control multiple servers at a time. These KVM switches may support eight or more computers at once. By simply pressing a button on the KVM switch, the administrator can view the display of any machine connected to the switch and control it with a single keyboard and mouse. Of course, KVM switches can also be used by the everyday home user. Some people may find it useful to have two computers at their desk, such as a home and work computer, or a Mac and a PC. In these situations, a KVM switch can accommodate both machines, allowing them to share the same monitor, keyboard, and mouse. Because only one of each is needed, the result is far less clutter on the desk. This leaves room for stacks of papers, mail, and other objects to clutter up the rest of the desk. Since most keyboards and mice use a USB connection, most KVM switches include USB ports. Older models may include PS/2 or serial ports. The connection for the monitor may be a VGA or DVI port, or both. If you plan on using a KVM switch for your computer setup, make sure the ports match the display and input devices you are going to use with it. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/kvmswitch Copy

Lag

Lag Lag is a slow response from a computer. It can describe any device that responds slower than expected, though it is most often used in online gaming. Video game lag is generally caused by one of two factors: A slow computer A slow Internet connection If you are playing an online multiplayer game and your computer cannot process the incoming data in real time, it may slow the game down for everyone. If your Internet connection is slow or inconsistent (which often happens with shared wireless connections), your system may not send and receive data quickly enough to keep up with other players. The lag may result in choppy frame rates and cause a delay between your input and what happens on the screen. In an ideal world, all online gamers would have fast computers and Internet connections. In reality, players have a wide range of computer systems and Internet connection speeds. Therefore, game developers must account for lag in multiplayer games. One method is to use a central game server that makes sure one player's lag only affects the individual with the slow computer or Internet connection. It prevents players with high-quality gaming setups from being negatively affected by users with slow systems. In some cases, a player with high lag will be booted from a multiplayer game. For example, in StarCraft 2, if a player's system is not responding, a message will appear on the player's screen that says, "Waiting for Server." Other players in the game will see a message that says "Waiting for player: [player name]." If the lag is too high, the server will eventually boot the lagging player from the game. Lag vs Latency The terms "lag" and "latency" are sometimes interchangeable since they often refer to the same thing. However, latency is a more technical term, usually quantified by a specific amount. For example, the latency of a player's network connection may be equal to the ping response time between his computer and the server. High latency (over 120 ms, for example) can produce noticeable lag. NOTE: "Lag" can also be used as a verb. For example, a player that is "lagging" may be slowing down the game. Updated August 18, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lag Copy

LAMP

LAMP Stands for "Linux, Apache, MySQL, and PHP." Together, these software technologies can be used to create a fully-functional web server. Linux is the most popular operating system used in web servers, primarily because many free Linux distributions are available. This means Linux-based servers are typically cheaper to set up and maintain than Windows servers. Since Linux is open source, it also works with many other popular open source web hosting software components. The most important software component in the "AMP" package is Apache, or "Apache HTTP Server." Apache is the software that serves webpages over the Internet via the HTTP protocol. Once Apache is installed, a standard Linux machine is transformed into a web server that can host live websites. Other components of LAMP include MySQL and PHP. MySQL is a popular open source database management system (DBMS) and PHP is a popular web scripting language. Together, these two technologies are used to create dynamic websites. Instead of only serving static HTML pages, a LAMP server can generate dynamic webpages that run PHP code and load data from a MySQL database. NOTE: In some instances, the "P" in LAMP may stand for either Perl or Python, which are other scripting languages. "AMP" packages for Windows and Mac systems are called WAMP and MAMP respectively. Updated May 23, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lamp Copy

LAN

LAN Stands for "Local Area Network" and is pronounced "lan." A LAN is a network of connected devices that exist within a specific location. LANs may be found in homes, offices, educational institutions, and other areas. A LAN may be wired, wireless, or a combination of the two. A standard wired LAN uses Ethernet to connect devices together. Wireless LANs are typically created using a Wi-Fi signal. If a router supports both Ethernet and Wi-Fi connections, it can be used to create a LAN with both wired and wireless devices. Types of LANs Most residential LANs use a single router to create the network and manage all the connected devices. The router acts as the central connection point and enables devices, such as computers, tablets, and smartphones to communicate with each other. Typically, the router is connected to a cable or DSL modem, which provides Internet access to connected devices. A computer may also be the central access point of a LAN. In this setup, the computer acts as a server, providing connected machines with access to files and programs located on the server. It also includes LAN software used to manage the network and connected devices. LAN servers are more common in business and educational networks, since the extra capabilities are not required by most home users. In a server-based LAN, devices may connect directly to the server or indirectly via a router or switch. NOTE: Multiple LANs may be combined to create a larger LAN. This type of network, which can be customized to include specific devices from various networks, is called a virtual LAN or VLAN. Updated December 29, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lan Copy

Landing Page

Landing Page A landing page is an informational or promotional webpage a user "lands on" after clicking a link. Most online advertisements link to a landing page that corresponds with the content of the ad. Landing pages often have a simple design and a primary call to action, such as "Learn More," "Download Now," or "Buy Today." The goal is to minimize distractions and guide the user to take a specific step. Some landing pages use the associated website template, while others may have a completely different design. They may be integrated into a website or published as "standalone" pages that do not link back to the primary site. Examples of landing page entry points include: A sponsored search result A link in a marketing email A link in a YouTube video A button in an app, such as "Upgrade Now" A promotional URL in a physical mailing Landing pages commonly include analytics code for tracking and marketing purposes. The analytics data may include things like source/referrer, IP address, platform, and link clicks. These metrics help marketers determine how effective each landing page is and can be used for remarketing purposes, such as customizing ads for repeat visitors. Updated April 8, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/landing_page Copy

Laptop

Laptop A laptop, also known as a notebook, is a portable computer with a built-in LCD screen, keyboard, and trackpad. A laptop's screen is on a hinge, which allows it to open up when in use and to close like a book to keep it safe when stowed away. A laptop computer also includes a battery that allows it to run anywhere, regardless of whether or not there's a power outlet nearby. Built-in Wi-Fi networking, and in some cases LTE or 5G cellular connectivity, allows a laptop to connect to and browse the Internet without any wires at all. Many laptop computers use mobile processors and chipsets designed for the needs of mobile computers — primarily, low power usage. Low-power CPUs and GPUs help the battery last longer and produce less heat, making them more comfortable to use. However, even though laptops are far more compact than desktop computers, they are not necessarily less capable. High-end laptop processors perform just as well as desktop processors, allowing professionals to work effectively from any location. They typically include several I/O ports, including multiple USB or Thunderbolt ports, and sometimes an HDMI port. Laptops can also connect to external displays and use USB and Thunderbolt hubs to expand connectivity, allowing one to serve as a desktop replacement that can travel when necessary. An ASUS laptop computer Some laptops also include tablet-style touchscreen displays, providing an additional way to interact with the computer. Some laptops, known as convertible laptops, blur the line between tablet and laptop even further by allowing you to open the screen all the way around to the back. This turns the laptop into a tablet, albeit one with a keyboard and trackpad on the back. While operating in tablet mode, the laptop turns off the keyboard and trackpad and relies entirely on the touchscreen for input. Every component of a laptop computer is tightly integrated to make it as compact as possible. While this helps with portability, it also means that laptops are much more complicated to repair since components are typically soldered directly onto the motherboard. This integration also prevents a laptop from being upgraded later, since you cannot swap out a laptop's RAM, graphics card, or SSD for a newer part. For that reason, it's wise to thoroughly consider your needs before purchasing a laptop since you won't have the chance to upgrade it later. Updated July 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/laptop Copy

Laser Printer

Laser Printer A laser printer is a type of printer that uses a laser beam to apply powdered ink, called toner, to a sheet of paper. A laser printer can print text and shapes at a high resolution, typically 600 dots per inch (dpi) or more. It can also print documents at significantly higher speeds than an inkjet printer, with most models able to print more than 30 pages per minute for simple monochrome documents. However, a laser printer cannot print high-resolution full-color images at the same quality as an inkjet printer, so the best choice depends on the content of the print job. Laser printers do not use the laser to burn text and other shapes into the paper. Instead, a printer fires a laser at the surface of a rotating cylindrical drum to "draw" the image it is printing. The laser hits the drum's photoreceptive coating and creates a negatively-charged area for positively-charged toner to cling to as the drum rotates past the cartridge. When a sheet of paper moves through the printer, the toner sticks to the paper instead (as it has a slightly stronger charge than the drum). After the paper proceeds past the drum, it passes through rollers that heat the toner and fuse it to the paper. A Brother HL-L2300D laser printer Since the toner is fused to the paper before it leaves the printer, it won't smear like a fresh page from an inkjet printer. Most laser printers only use black toner, suitable for printing pages of text and other monochrome content. Color laser printers use four colors of toner (cyan, magenta, yellow, and black) and are more expensive, as they are more technically complex and require a separate drum for each color. While a replacement toner cartridge often costs more than an inkjet ink cartridge, a toner cartridge contains enough toner to print thousands of pages, and an ink cartridge may only last a hundred. Updated December 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/laserprinter Copy

Latency

Latency In computing, "latency" describes some type of delay. It typically refers to delays in transmitting or processing data, which can be caused by a wide variety of reasons. Two examples of latency are network latency and disk latency, which are explained below. 1. Network latency Network latency describes a delay that takes place during communication over a network (including the Internet). For example, a slow router may cause a delay of a few milliseconds when one system on a LAN tries to connect to another through the router. A more noticeable delay may happen if two computers from different continents are communicating over the Internet. There may be a delay in simply establishing the connection because of the distance and number of "hops" involved in making the connection. The "ping" response time is a good indicator of the latency in this situation. 2. Disk latency Disk latency is the delay between the time data is requested from a storage device and when the data starts being returned. Factors that affect disk latency include the rotational latency (of a hard drive) and the seek time. A hard drive with a rotational speed of 5400 RPM, for example, will have almost twice the rotational latency of a drive that rotates at 10,000 RPM. The seek time, which involves the physical movement of the drive head to read or write data, can also increase latency. Disk latency is why reading or writing large numbers of files is typically much slower than reading or writing a single contiguous file. Since SSDs do not rotate like traditional HDDs, they have much lower latency. Other types of latency Many other types of latency exist, such as RAM latency (a.k.a. "CAS latency"), CPU latency, audio latency, and video latency. The common thread between all of these is some type of bottleneck that results in a delay. In the computing world, these delays are usually only a few milliseconds, but they can add up to create noticeable slowdowns in performance. NOTE: It is important to not confuse latency with other measurements like data transfer rate or bandwidth. Latency refers to the delay before the data transfer starts rather than the speed of the transfer itself. Updated March 3, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/latency Copy
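The rotational-latency comparison above can be made concrete: average rotational latency is half the time of one platter revolution. A small TypeScript sketch (the function name is illustrative):

// Average rotational latency in milliseconds for a spinning disk.
function avgRotationalLatencyMs(rpm: number): number {
  const msPerRevolution = 60000 / rpm; // 60,000 ms per minute
  return msPerRevolution / 2;          // on average, half a revolution
}
console.log(avgRotationalLatencyMs(5400).toFixed(2));  // ~5.56 ms
console.log(avgRotationalLatencyMs(10000).toFixed(2)); // ~3.00 ms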

Launcher

Launcher A launcher is an application that organizes other applications into a single place for easy access. The term "launcher" is often used to refer specifically to launcher apps on Android devices that replace the default home screen. It may also refer to game launchers that help gamers buy, download, install and launch games. Android Launchers Launchers are a class of application that replaces an Android smartphone's default home screen. Each Android device includes a built-in launcher app, but you can download and install a third-party launcher. Changing an Android device's launcher is a way of customizing its appearance and functionality beyond what is offered by default but without the extra hassle of rooting the device. Third-party launchers offer new ways to customize the home screen. Launchers may add new widgets that provide information directly on the home screen or new gestures that provide quick shortcuts to specific apps. They may also redesign the app drawer to present your installed apps in a new way. Game Launchers Game launcher applications organize computer games into a single library, helping you browse and find the one you want to play. Most popular game launchers are also full game storefronts; you can browse available games, purchase one, download it, install it, then launch it without leaving the launcher. Many game launchers even include community features like built-in messaging and achievements. Steam is a popular game launcher and storefront for Windows, macOS, and Linux Game launchers are often made by game publishing companies like Epic Games and Ubisoft to distribute their games. Some games are exclusive to specific launchers, although most are eventually available across multiple launchers and storefronts. If you have more than one game launcher installed, it is important to remember which launcher you used to buy a game — it is usually not possible to transfer a game from one to another. Updated April 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/launcher Copy

Lazy Loading

Lazy Loading Lazy loading is a programming technique that delays loading resources until they are needed. A common example is a webpage that defers loading images until the user scrolls to their location within the page. Lazy loading is used on the web and in software programs, such as mobile and desktop applications. Lazy Loading on the Web Lazy loading images within a webpage can speed up the load time since the browser does not need to load images that are not visible. As the user scrolls through the page, the images are loaded dynamically. This is accomplished using JavaScript that detects the position of each image and determines if it is in the browser window's viewable area. If the user scrolls down to an image, the JavaScript will request the resource from the web server and display the image on the page. If the user does not scroll down, the image will not load. It is possible to delay the loading of other resources, such as JavaScript files, CSS, and even the HTML itself. For example, a web developer might determine which CSS styles are needed for "above-the-fold" content on a webpage, or content viewable within the height of a typical browser window. The developer can implement these as "inline styles," or styles defined within the HTML of the webpage. JavaScript is used to load additional CSS after the page has loaded or once the user starts scrolling. Lazy loading video is also popular on the web. It is especially effective since video files are typically the largest resources loaded within a webpage. Instead of sending the entire video to a client's device, the web server only sends small portions of the video while the user is watching it. Popular video sharing websites like YouTube and Vimeo use lazy loading to reduce bandwidth and to prevent users from downloading more video content than necessary. This is especially helpful for users with metered Internet connections, such as mobile data plans. When lazy loading a video, it is common to load a few seconds or even several minutes ahead of the current point in the video. The video data is saved in a buffer, which helps videos play smoothly even when the Internet connection is not consistent. Lazy Loading in Software Programs While lazy loading has become increasingly popular on the web, it has been used in software development for a long time. For example, an operating system may only display thumbnail images for the visible icons in a folder. Similarly, an image viewing program may only load the visible images in a photo library. This uses less memory and improves application performance because the program does not load unnecessary data. Updated January 28, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lazy_loading Copy
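One common browser-side approach is the IntersectionObserver API, sketched below in TypeScript. It assumes images are written with a data-src attribute instead of src, which is an illustrative convention rather than a web standard.

// Load each image only when it scrolls into (or near) the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    if (img.dataset.src) {
      img.src = img.dataset.src; // request the real image only now
    }
    obs.unobserve(img);          // stop watching once loaded
  }
}, { rootMargin: "200px" });     // begin loading shortly before the image is visible

document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
  observer.observe(img);
});

Modern browsers also support a native loading="lazy" attribute on image elements, which achieves a similar effect without custom scripting.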

LCD

LCD Stands for "Liquid Crystal Display." LCD is a flat panel display technology commonly used in TVs and computer monitors. It is also used in screens for mobile devices, such as laptops, tablets, and smartphones. LCD displays don't just look different from bulky CRT monitors; the way they operate is significantly different as well. Instead of firing electrons at a glass screen, an LCD has a backlight that provides light to individual pixels arranged in a rectangular grid. Each pixel has red, green, and blue (RGB) sub-pixels that can be turned on or off. When all of a pixel's sub-pixels are turned off, it appears black. When all the sub-pixels are turned on at 100%, it appears white. By adjusting the individual levels of red, green, and blue light, millions of color combinations are possible. How an LCD works The backlight in a liquid crystal display provides an even light source behind the screen. This light is polarized, meaning only half of the light shines through to the liquid crystal layer. The liquid crystals are made up of a part solid, part liquid substance that can be "twisted" by applying electrical voltage to them. They block the polarized light when they are off, but reflect red, green, or blue light when activated. Each LCD screen contains a matrix of pixels that display the image on the screen. Early LCDs had passive-matrix screens, which controlled individual pixels by sending a charge to their row and column. Since a limited number of electrical charges could be sent each second, passive-matrix screens were known for appearing blurry when images moved quickly on the screen. Modern LCDs typically use active-matrix technology, which contains thin-film transistors, or TFTs. These transistors include capacitors that enable individual pixels to "actively" retain their charge. Therefore, active-matrix LCDs are more efficient and appear more responsive than passive-matrix displays. NOTE: An LCD's backlight may either be a traditional bulb or LED light. An "LED display" is simply an LCD screen with an LED backlight. This is different than an OLED display, which lights up individual LEDs for each pixel. While the liquid crystals block most of an LCD's backlight when they are off, some of the light may still shine through (which might be noticeable in a dark room). Therefore, OLEDs typically have darker black levels than LCDs. Updated June 8, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lcd Copy

LDAP

LDAP Stands for "Lightweight Directory Access Protocol." LDAP is a protocol for accessing directory information services over a network. LDAP directory servers store information about user accounts, network-accessible resources, and other organizational data in a directory database optimized for fast information retrieval. LDAP is often used for authentication and permissions management in an organization, using tools like Microsoft Active Directory or IBM Directory Services. It functions over TCP/IP, which allows it to provide directory information over local networks and the Internet. An LDAP directory server is like an organization's phone book, containing information that helps identify users and locate resources within the directory's hierarchy. Each directory is organized into several levels of a tree, starting at the root directory and split into branches — geographic locations, organizations, and organizational units like divisions and departments. Individual users within those divisions are further organized into groups and then assigned various attributes like user ID, email address, and permissions levels. A simplified diagram of an LDAP directory tree The most common reason for an application to communicate with a directory server using LDAP is for user authentication. For example, when a user wants to sign into a web app, that app makes an LDAP query that checks the provided username and password against what is in the directory and grants access if they match. Communication applications (like email clients, voice and video conferencing apps, and other collaboration tools) also use LDAP to provide an address book that allows one user to look up any other user in the directory system. Updated August 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ldap Copy
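Entries in an LDAP tree are identified by distinguished names (DNs), and lookups use standard search filters (RFC 4515 syntax). A small illustrative TypeScript sketch; the organization and user names are made up, and no particular LDAP client library is assumed.

// A distinguished name identifies one entry by walking up the tree
// (dc = domain component, ou = organizational unit).
const userDN = "uid=jdoe,ou=Engineering,dc=example,dc=com"; // hypothetical entry

// A search filter that matches a person entry by user ID.
const filter = "(&(objectClass=person)(uid=jdoe))";

console.log(userDN, filter);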

Lead

Lead Lead (pronounced "leed") is a marketing term that describes a connection made with a potential customer or client. The goal of most online advertising is to generate leads, which may "lead" to sales or subscriptions for a company or organization. The process of creating leads on the Internet is called online lead generation. This can be accomplished by running ads through a service like Google AdWords or Bing Ads or by directly purchasing advertising space on other websites. Advertisers can choose to run either CPC (cost per click) or CPA (cost per action) ads, which can both generate leads. While CPA and CPL (cost per lead) are often used interchangeably, CPL specifically measures lead generations, rather than aggregate clicks or sales. A typical online lead is created when a user fills out and submits a form on a website. The number of fields required depends on the type of lead. For example, an automotive dealer website may provide an online form for interested buyers. The form may request your name, email address, physical address, phone number, and best time to reach you. Other forms are more basic and only capture an email address. These types of leads are often created when you sign up for a newsletter or download a software program. Whatever information is captured in a lead can be used to contact you at a later time for marketing purposes. Updated October 18, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lead Copy

Leaderboard

Leaderboard A leaderboard ad is a wide advertisement that appears at the top of a web page before any of the page's content. It is one of the standard sizes used across all Internet advertising platforms. It has a standard size of 728 pixels wide by 90 pixels tall (728x90). Leaderboard ads can be either text-based or image-based, with images being more popular. They may even include animations. Clicking a leaderboard ad directs the visitor to a landing page set by the advertiser. Website owners may choose to place a leaderboard ad above a page's header, or below their header but above the rest of their page's contents. Ads positioned at the top of a page have a higher click-through rate and are most valuable for advertisers. While positions at the top of the page are the most common, some website owners place additional leaderboard-sized ads in the middle of a page's contents or at the footer. The leaderboard ad size replaced the previous standard, the 468x60 banner ad. Leaderboard ads are one of the most popular sizes of online advertisements, second only to the 300x250 Mid-Page Unit (MPU) or medium rectangle. They are common in both click-based and impression-based advertising campaigns. Updated November 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/leaderboard Copy

Leading

Leading Leading is a typography term that describes the distance between each line of text. It is pronounced ledding (like "sledding" without the "s"). The name comes from a time when typesetting was done by hand and pieces of lead were used to separate the lines. Today, leading is often used synonymously with "line height" or "line spacing." Sometimes the terms have the same meaning, while in some applications, the implementation is different. Font leading is a setting commonly found in graphic design programs, while word processors typically use line spacing. Leading is typically measured in pixels, while line spacing is measured as a ratio of the default line height. In manual typesetting, leading defined the distance between each line (the width of the lead). However, modern applications often calculate leading as the font size plus the space above a line of text. In Photoshop, for example, the default leading or "Auto" setting for a 40px font is roughly 50px (125% of 40px). The extra ten pixels provides decent padding between each row of text, which makes it more readable. If the default leading is too much or too little, it can be modified using the Leading setting in Photoshop's Character palette. Most text editors use a default line spacing of 120 to 130% of the font size. A line spacing setting of 1.2, for example, is common in many word processors. However, this can be adjusted to another setting, such as 1.5 or 2.0, to add extra spacing between the lines. If leading or line spacing is reduced below the default setting (such as a leading of 20px for a 40px font), the lines may overlap. NOTE: Leading is generally calculated from the baseline of a font (the bottom of a lowercase "a") to the baseline of the row above it. However, some programs add more vertical padding to each line than others. Therefore, different programs may produce different line heights with the same leading size. Updated October 11, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/leading Copy

Leaf

Leaf A "leaf" in computing refers to a file within a hierarchical directory structure. Files are akin to leaves on a tree. Just as a tree's branches extend outwards, dividing into smaller branches, a file system organizes its data in a similar fashion. In the tree structure: The lowest-level directory (also called the root directory) is the trunk of the tree. Folders (or directories) act as the branches, branching into subdirectories. Files are the leaves, which serve as the endpoints of each branch. A branch (folder) can have multiple leaves. Technically, a leaf node in a tree structure is defined as a node that does not have child nodes. Files fit this definition, though some files are "containers" that store multiple files. Tree structure vs root structure The directory structure of a storage device may also be pictured as a tree's root system, where the root folder is the top-level directory. In this model, the tree structure is flipped upside down with the root directory on top and files and folders below it. Updated December 9, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/leaf Copy

LED

LED Stands for "Light-Emitting Diode." An LED is an electronic device that emits light when an electrical current is passed through it. Early LEDs produced only red light, but modern LEDs can produce several different colors, including red, green, and blue (RGB) light. Recent advances in LED technology have made it possible for LEDs to produce white light as well. LEDs are commonly used for indicator lights (such as power on/off lights) on electronic devices. They also have several other applications, including electronic signs, clock displays, and flashlights. Since LEDs are energy efficient and have a long lifespan (often more than 100,000 hours), they have begun to replace traditional light bulbs in several areas. Some examples include street lights, the red lights on cars, and various types of decorative lighting. You can typically identify LEDs by a series of small lights that make up a larger display. For example, if you look closely at a street light, you can tell it is an LED light if each circle is comprised of a series of dots. The energy efficient nature of LEDs allows them to produce brighter light than other types of bulbs while using less energy. For this reason, traditional flat screen LCD displays have started to be replaced by LED displays, which use LEDs for the backlight. LED TVs and computer monitors are typically brighter and thinner than their LCD counterparts. Updated July 31, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/led Copy

Leet

Leet Leet, or leetspeak, is a method of typing words using alternate characters. Letters are replaced with numbers or symbols that closely resemble them. For example, the letter "a" might be replaced with the @ symbol and the letter "E" might be replaced with the number 3. The word "leet" can be written as "1337." Some letters have obvious numeric alternatives, such as "8" for "B" and "5" for "S." Other characters are easily replaced by special characters. For example, "ƒ" can be used for "f" and "µ" may be used for "u." In cases where no obvious substitution is available, multiple characters may be used. For example, "/" can replace "V" and "|)" can replace "D." Most letters have more than one possible leet replacement, so there is no standard way to write words in leetspeak. Additionally, font case is interchangeable. Leet words often contain a combination of lowercase and uppercase characters. In some cases, letters may be omitted or changed to something similar, such as "z" for the letter "s." The word "computer" can be written in leetspeak several different ways. Below are a few examples: CøM9µ†3r c0m¶uT€R ¢o^^püt&r çOm|°|_|7e® Ç[]//pÜ+é/I2 While there is no correct way to write words in leetspeak, they should be easy to decipher. Leet has no official use, though it is commonly associated with the computing elite (3l33t) and hackers (h@ck0rz). It is often used just for fun, though it can also be used to intimidate newbies (n00bs) in web forums or online chatrooms. Some users type questionable words in leetspeak to bypass word filters in online games or discussions. NOTE: You can view a list of common leet translations from the PC.net Leet Sheet. Updated January 12, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/leet Copy
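A toy TypeScript sketch of one possible substitution table; since there is no standard leet mapping, the table below is purely illustrative.

// One of many possible leet mappings; unmapped characters pass through unchanged.
const leetMap: Record<string, string> = {
  a: "@", b: "8", e: "3", i: "1", l: "|", o: "0", s: "5", t: "7",
};

function toLeet(text: string): string {
  return [...text.toLowerCase()].map((ch) => leetMap[ch] ?? ch).join("");
}

console.log(toLeet("leet speak")); // "|337 5p3@k"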

Left-Click

Left-Click A left-click involves clicking the left mouse button. Typically, "left-clicking" means the same thing as just "clicking" since the left mouse button is the primary button by default. The term "left-click" is most often used in contrast to "right-click," which involves clicking the secondary button on the right side of the mouse. Left-clicking is used for many common computer tasks, such as selecting objects, opening hyperlinks, and closing windows. Clicking and holding the left mouse button can be used to select text or perform drag and drop operations. In video games, left-clicking is typically used to perform the primary action, such as moving a character or firing a weapon. For right-handed users, the left-click is the most natural click, since it is done with the index finger. For people who use a mouse with their left hand, it is a bit less natural since left-clicking is done with the middle finger and the index finger is used for right-clicking. Therefore, Mac OS X and Windows allow you to switch the primary mouse button from left to right. NOTE: Double-clicking is simply two left-clicks in rapid sequence. Updated April 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/left-click Copy
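In browser scripts, the MouseEvent.button property reports which button was clicked: 0 is the primary button (normally the left one) and 2 is the secondary button. A short TypeScript sketch using the standard DOM API:

document.addEventListener("mousedown", (event: MouseEvent) => {
  if (event.button === 0) {
    console.log("primary (usually left) button");
  } else if (event.button === 2) {
    console.log("secondary (usually right) button");
  }
});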

Leopard

Leopard Leopard is another name for Mac OS X 10.5, which was released on October 26, 2007. It followed Mac OS X 10.4 Tiger and preceded the release of Mac OS X 10.6 Snow Leopard. Mac OS X Leopard was one of the most significant updates to Mac OS X, with over 300 new features. Some of the most notable additions include Time Machine (an automated backup solution), Spaces (a virtual desktop environment), and Quick Look (a feature that allows many file types to be viewed directly in the Finder by pressing the space bar). Leopard was also the first version of Mac OS X to include Boot Camp, a feature that allows you to run Windows on your Mac. Besides the above features, Leopard also provided several enhancements to existing Mac OS X software. For example, RSS feed support was added to Mail, advanced photo filters were added to Photo Booth, and the Spotlight search feature added support for boolean operators. Leopard also shipped with Safari 3, the third major release of Apple's web browser. The final Leopard software update was 10.5.8, released on August 5, 2009. Updated October 24, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/leopard Copy

LFN

LFN Stands for "Long Filename." LFN is an extension to the "short filename" standard used in DOS, which only allowed eight uppercase characters plus a three-character file extension. LFN filenames can be up to 255 characters long, including the file extension, which may be longer or shorter than three characters. They can include lowercase and uppercase characters, as well as spaces, numbers, and symbols. Microsoft introduced long filename support with the VFAT file system in 1994. VFAT was first included with Windows NT 3.5 and later in Windows 95. To maintain backward compatibility with the previous FAT file system, Microsoft provided an automatic way to shorten long filenames using a tilde followed by a number. If the shortened filename conflicted with another filename, the number would be incremented by one. For example: EXAMPLE.DOC → EXAMPLE.DOC Example.doc → EXAMPLE.DOC Sample File.doc → SAMPLE~1.DOC Sample File copy.doc → SAMPLE~2.DOC Verylongfilename.txt → VERYLO~1.TXT NOTE: While Windows, macOS, and Unix have supported long filenames for over two decades, a filename may still be truncated if the file is copied or edited by a Windows system using the FAT file system. Updated October 22, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lfn Copy
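The tilde-based shortening shown above can be approximated in a few lines of code. The PHP sketch below is only a simplified illustration (the real VFAT algorithm also strips invalid characters and truncates long extensions), and the function name is hypothetical.

    <?php
    // Simplified sketch of 8.3 short-name generation (not the exact VFAT algorithm)
    function shortName($longName, $existingNames) {
        $ext  = strtoupper(pathinfo($longName, PATHINFO_EXTENSION));
        $base = strtoupper(str_replace(" ", "", pathinfo($longName, PATHINFO_FILENAME)));

        if (strlen($base) <= 8) {
            return "$base.$ext";                    // already fits the 8.3 format
        }

        $n = 1;
        do {                                        // truncate to six characters, then add ~1, ~2, ...
            $short = substr($base, 0, 6) . "~" . $n++ . "." . $ext;
        } while (in_array($short, $existingNames)); // increment the number until the name is unique
        return $short;
    }

    echo shortName("Sample File.doc", []);                     // SAMPLE~1.DOC
    echo shortName("Sample File copy.doc", ["SAMPLE~1.DOC"]);  // SAMPLE~2.DOC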

LGA

LGA Stands for "Land Grid Array." LGA is a type of processor package that places the connection pins in the socket on the motherboard rather than on the integrated circuit itself. The pins in the socket connect to flat contacts, or "lands," found on the bottom of the processor chip. LGA processor packages have several advantages over pin grid array (PGA) packages. First, chips using LGA can have a much higher density of contacts than chips using PGA. The higher density allows a processor socket to take up less space on a motherboard or fit more contacts into the same area. The chips themselves are also more durable and resistant to damage since the fragile pins are instead in the socket. Finally, LGA packaging allows a processor to be either used in a socket or soldered directly to a motherboard. There are many configurations of LGA sockets used by processor manufacturers, each in use for a few years before processor architecture changes necessitate a redesigned socket. Both Intel and AMD use varieties of LGA sockets for the CPU chips. Intel names its LGA sockets for the number of contact pins used. The first Intel LGA socket was called LGA775 (since it had 775 pins), followed by LGA1156 (which had 1,156 pins). AMD uses a sequential naming scheme that continues the pattern from their PGA sockets, first utilizing an LGA package on their TR4 and AM5 sockets. Updated October 18, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/lga Copy

Library

Library The word "library," in the context of computer software, has several meanings. It may refer to a collection of pre-written source code and other resources a program can reference. A library may also describe an organized database of files a program manages, like a music or photo library. Code and Resource Libraries Operating systems include library bundles that contain source code functions and other resources. Unix, Linux, and macOS use library folders to store scripts, fonts, and other system resources. Windows uses DLL files located in various system folders. Any program running on a computer has access to these libraries and can reference their contents during runtime instead of bundling the information in their executable files. [Image: Resources in the macOS Library folder that any app can access.] Code libraries are available to developers separately for inclusion in their programs. Developers may use these specialized libraries to add preexisting processes to their programs. For example, the OpenSSL C library contains functions for encryption, cryptography, and SSL security. A developer can include OpenSSL in a project and use the prewritten functions instead of writing them from scratch, saving significant time and effort. Media and File Libraries A library may also refer to a collection of files a program can organize and access. For example, an Apple Music library manages music files — organizing them into folders, and providing quick access to any file within the app. When you add new songs, Apple Music automatically adds the files (and the corresponding metadata) to the library. Other types of media can be organized into libraries as well. Photo libraries may contain custom albums and can automatically sort photos by date, location, or even by person, using facial recognition. Smartphones automatically store photos taken in a library, and photo library apps like Apple Photos also allow you to import photos from other cameras. Video apps like Plex can create libraries of movies and TV shows, letting you easily browse your collection from the app's interface. Other media apps like podcast players and eBook readers provide similar libraries that allow you to download and import files. Updated June 16, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/library Copy
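As a small example of calling a code library instead of writing the functionality from scratch, the PHP sketch below uses the OpenSSL library through PHP's bundled openssl extension (assuming the extension is enabled) to encrypt and decrypt a string with AES.

    <?php
    // Rely on the OpenSSL library rather than implementing AES ourselves
    $key  = random_bytes(32);                                       // 256-bit key
    $iv   = random_bytes(openssl_cipher_iv_length("aes-256-cbc"));
    $data = "Secret message";

    $encrypted = openssl_encrypt($data, "aes-256-cbc", $key, 0, $iv);
    $decrypted = openssl_decrypt($encrypted, "aes-256-cbc", $key, 0, $iv);

    echo $decrypted;   // prints "Secret message"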

LiDAR

LiDAR Stands for "Light Detection and Ranging." LiDAR is a distance detection technology that uses pulses of light to detect surrounding objects. A LiDAR scanner determines the distance of each object by calculating the time each pulse takes to reflect back to the source. Using multiple light pulses, it can generate a 3D depth map of the scanned area. LiDAR was originally used for terrain mapping purposes, such as creating topographical images of the earth's surface. High-powered LiDAR units on drones and other aircraft can scan large areas at once. Combining LiDAR and GPS data makes it possible to generate three-dimensional maps of scanned locations. In recent years, LiDAR has made its way to everyday devices, including automobiles and smartphones. Automotive LiDAR Modern cars and trucks use LiDAR to detect surrounding vehicles and other objects. Short-range LiDAR (under 50 meters) provides features like blind spot detection and park assist. Long-range LiDAR (up to 200 meters) enables features like adaptive cruise control and automatic emergency braking. LiDAR scanners do not capture color data. Therefore, autonomous vehicles combine LiDAR and video camera data to detect and identify surrounding objects. Smartphone LiDAR Modern high-end smartphones, such as the iPhone 12 Pro and later, include short-range LiDAR. The depth-sensing technology improves photo quality and provides augmented reality (AR) features. For example, when taking a photo, LiDAR helps the phone detect and focus on the primary objects in the frame. When using AR technology, such as Snapchat filters, LiDAR improves face detection, providing smoother and more accurate mapping. NOTE: While Apple's FaceID employs 3D imaging to detect faces, it uses infrared technology rather than LiDAR. Updated May 5, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lidar Copy
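Because each pulse travels to the object and back, the distance is half of the round-trip time multiplied by the speed of light. Below is a minimal PHP sketch of that calculation (the round-trip time shown is just an example value).

    <?php
    $c = 299792458;               // speed of light in meters per second
    $roundTripTime = 0.000000334; // measured round-trip time in seconds (example value)

    // Divide by 2 because the pulse travels to the object and back
    $distance = ($c * $roundTripTime) / 2;
    echo round($distance, 2) . " meters";   // about 50.07 meters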

LIFO

LIFO Stands for "Last In, First Out." LIFO is a method of processing data in which the last items entered are the first to be removed. The opposite of LIFO is FIFO (First In, First Out), in which items are removed in the order they were entered. To better understand LIFO, imagine stacking a deck of cards by placing one card on top of the other, starting from the bottom. Once the deck has been fully stacked, you begin to remove the cards, starting from the top. This process is an example of the LIFO method, because the last cards to be placed on the deck are the first ones to be removed. The LIFO method is sometimes used by computers when extracting data from an array or data buffer. When a program needs to access the most recent information entered, it will use the LIFO method. When information needs to be retrieved in the order it was entered, the FIFO method is used. Updated February 23, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lifo Copy
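The card-stacking analogy above maps directly onto a stack data structure. In the PHP sketch below, array_push() places items on top of the stack and array_pop() removes the most recently added item first.

    <?php
    $stack = [];
    array_push($stack, "first");
    array_push($stack, "second");
    array_push($stack, "third");

    echo array_pop($stack);   // "third"  (the last item in is the first out)
    echo array_pop($stack);   // "second"
    echo array_pop($stack);   // "first"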

Lightning

Lightning Lightning is a proprietary I/O interface designed by Apple for its mobile devices, such as the iPhone, iPad, and iPod. It was first introduced in September 2012, with the iPhone 5 and new iPod models. It was later added to iPads, beginning with the 4th generation iPad and the first generation iPad mini. The Lightning interface replaced the previous "dock connector," which Apple products had used since 2003. The Lightning connector has eight pins and is about one third the size of the 30-pin dock connector it supersedes. Instead of having mobile latches on the sides, the Lightning connector has small divots on each side that allow it to snap into place. Even without magnets or clips, the Lightning connection is designed to be strong enough to hold a device upside down by the cable without the cable detaching. Unlike most other I/O interfaces, such as USB, Firewire, and the previous dock connector, the Lightning connector is reversible. The connection is fully symmetrical and the connector can be inserted either way into a Lightning port. This means it is impossible to insert the cable upside down, which makes it easier to plug in and reduces wear and tear on the interface. The order of the pins is recognized dynamically by the device when the connection is made, allowing power and data to flow through the correct channels. The name "Lightning" corresponds with Thunderbolt, another I/O port Apple began using around the same time the Lightning interface was introduced. However, unlike Thunderbolt, Lightning is proprietary and is only used in Apple products. While most mobile devices (such as Android phones) have standard mini-USB or micro-USB ports, if you have an Apple device, you will need a Lightning cable to charge it and transfer data. Updated February 5, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lightning Copy

Link

Link A link (short for hyperlink) is an HTML object that allows you to jump to a new location when you click or tap it. Links are found on almost every webpage and provide a simple means of navigating between pages on the web. Links can be attached to text, images, or other HTML elements. Most text links are blue since that is the standard color web browsers use to display links. However, links can be any color since the style of the link text may be customized using HTML or CSS styles. In the early days of the web, links were underlined by default. Today, underlining links is less common. When a link is applied to an image, the link tag encapsulates, or surrounds the image tag. Since the image tag is nested inside the link tag, the image itself becomes a link. This method can be used to apply links to other HTML elements as well. However, since CSS can be used to stylize a link, an <a> tag with a CSS class or ID attribute is often used in place of another element. Below is an example of the HTML for a text link and an image link (the image file name is only an illustration): Text Link: <a href="/definition/computer">Computer Definition</a> Image Link: <a href="https://techterms.com/definition/computer"><img src="computer.jpg" alt="Computer"></a> Relative and Absolute Links The first link above is a "relative link" because it does not include the domain name. Instead, the link is relative to the current website. Any internal link on TechTerms.com, for example, does not need "https://techterms.com/" in the source. Rather, a relative link like "/definition/computer" is all that is required. Since the link starts with a forward slash, the path begins with the root directory. If a relative link does not start with a forward slash, the path is relative to the current URL. The second link above is an absolute link because it includes the domain name. Absolute links are required for external links, which direct you to another website. They may begin with "http" or "https." Absolute links may also begin with two forward slashes ("//"). This is interpreted as "http://" for pages served via HTTP and "https://" for pages served via HTTPS. NOTE: The "a" in the <a> tag stands for "anchor," since early hypertext documents often linked to anchors (or markers) within a page rather than other pages. The "href" within an <a> tag stands for "hypertext reference." Updated June 13, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/link Copy

LinkedIn

LinkedIn LinkedIn is a social networking website designed for business professionals. It allows you to share work-related information with other users and keep an online list of professional contacts. Like Facebook and MySpace, LinkedIn allows you to create a custom profile. However, profiles created within LinkedIn are business-oriented rather than personal. For example, a LinkedIn profile highlights education and past work experience, which makes it appear similar to a resume. Profiles also list your connections to other LinkedIn users, as well as recommendations you make or receive from other users. By using LinkedIn, you can keep in touch with past and current colleagues, which can be useful in today's ever-changing work environment. You can also connect with new people when looking for potential business partners. While people outside your personal network cannot view your full profile, they can still view a snapshot of your education and work experience. They can also contact you using LinkedIn's anonymous "InMail" messaging service, which could lead to new job opportunities. LinkedIn has several benefits for business professionals, which is why it is used by millions of people across the world. Just remember, if you decide to create a LinkedIn profile, keep your information professional. It's best to save your personal information for the other social networking websites. Updated March 3, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/linkedin Copy

Linux

Linux Linux (pronounced "lih-nux", not "lie-nux") is a Unix-like operating system (OS) created by Linus Torvalds. He developed Linux because he wasn't happy with the currently available options in Unix and felt he could improve it. So he did what anybody else would do and created his own operating system. When Linus finished building a working version of Linux, he freely distributed the OS, which helped it gain popularity. Today, Linux is used by millions of people around the world. Many computer hobbyists (a.k.a. nerds) like the operating system because it is highly customizable. Programmers can even modify the source code and create their own unique version of the Linux operating system. Web hosting companies often install Linux on their Web servers because Linux-based servers are cheaper to set up and maintain than Windows-based servers. Since the Linux OS is freely distributed, there are no licensing fees. This means Linux servers can host hundreds or even thousands of websites at no additional cost. Windows servers, on the other hand, often require user licenses for each website hosted on the server. Linux is available in several distributions. Some of the most popular distributions include Red Hat Enterprise, CentOS, Debian, openSUSE, and Ubuntu. Linux also supports several hardware platforms, including Intel, PowerPC, DEC Alpha, Sun Sparc, and Motorola. Since Linux is compatible with so many types of hardware, variations of the Linux operating system are used for several other electronic devices besides computers. Some examples include cell phones, cable boxes, and Sony's PS2 and PS3 gaming consoles. Updated February 11, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/linux Copy

Lion

Lion Lion is another name for Mac OS X 10.7, the eighth version of Apple's desktop operating system. It was released on July 20, 2011, almost two years after Mac OS X 10.6 Snow Leopard and preceded the release of Mac OS X 10.8 Mountain Lion. Lion was the first version of Mac OS X to be released on the Mac App Store and was not sold on a DVD like previous versions of the operating system. Mac OS X 10.7 Lion was one of the most substantial updates to Mac OS X and included more than 250 new features. Most notably, Lion added several features from iOS, Apple's mobile operating system that runs on the iPhone and iPad. For example, Lion supported several new multi-touch gestures, such as swiping to switch between applications and to navigate between webpages. Additionally, the user interface was updated to look more like the iOS, with hidden scroll bars and a bouncing animation when you scroll past the top or bottom of a page. In an effort to make Mac OS X 10.7 more like iOS, Apple also introduced system-wide support for full-screen apps, which hides the menu bar and allows the current application to fill the entire screen. Several of Apple's own applications including Safari, Mail, iPhoto, and Pages, were updated with full-screen support when Lion was released and many third party applications were updated shortly after. Lion also introduced "Launchpad" (similar to the Launcher included with early versions of the Mac OS), which provides one-click access to all programs installed in the Applications folder. Finally, Lion added a "Resume" feature that saves the state of open windows in applications, so they reappear automatically when you reopen a program or restart your computer. The last Lion software update was 10.7.5, released on October 4, 2012. Updated October 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lion Copy

LISTSERV

LISTSERV LISTSERV is an application that creates, automates, and manages email mailing lists. It was first developed and released in 1986 by Eric Thomas to improve on earlier mailing list software by automating membership; previously, administrators needed to manage mailing list membership manually, which was a time-consuming process that limited the scale of a mailing list. Soon after its release, LISTSERV became the standard mailing list tool for BITNET, a network of universities. Mailing lists using LISTSERV work over standard email protocols, allowing subscribers to receive messages in their inbox instead of through separate software. It allows you to easily subscribe and unsubscribe to the list through a web form or by emailing the LISTSERV email address. Once you've subscribed, you can send messages to everyone by emailing the LISTSERV. Messages and replies sent by other members are forwarded by the LISTSERV to your address. The mailing list's subscriptions and message distribution are all automated by the LISTSERV program. LISTSERV provided an efficient medium for group discussions online years before the advent of social media. Unlike other mailing list programs focused on sending newsletters and marketing messages one way, LISTSERV allows every subscriber to participate in a back-and-forth conversation. The LISTSERV program has been under active development by L-Soft since it transitioned to a commercial product in 1994, adding modern features like HTML email formatting and email marketing tools. NOTE: "LISTSERV" is sometimes used synonymously with "mailing list," but they are not the same thing. LISTSERV is a single program that provides automated mailing list functionality. Other mailing list programs and services besides LISTSERV are also available. Updated September 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/listserv Copy

Live Streaming

Live Streaming Live streaming is a type of streaming in which audio or video is broadcast live over the Internet. The media is transmitted while it is recorded, allowing viewers to watch or listen to it in real-time. Most live streams are delivered via multicasting. Unlike a television broadcast, a multicast only transmits the media to users who choose to watch or listen to the stream. Multiple users can "tune in" to a single stream, providing an efficient way of delivering the audio or video to several locations at once. Live Streaming Platforms Twitch is a popular live streaming platform, which is mainly used by gamers to stream their games live over the Internet. It is also used to broadcast live eSports events. Personal streaming platforms, such as Periscope and Facebook Live, allow users to live stream video from their smartphones. YouTube Live allows YouTube users to share videos in real-time via live streaming. Streaming vs Live Streaming The terms "streaming" and "live streaming" are often used interchangeably, but they have different meanings. Streaming is delivering media over the Internet that can be played while it is downloaded. Live streaming is a specific type of streaming that is broadcast at the same time it is recorded. An easy way to check if you are watching a live stream is to see if the video has a defined length. If there is no end to the video and you can't jump ahead in the video, you are most likely watching a live stream. Most live streams also display a "Live" indicator somewhere in the interface. However, previously-recorded videos may have a "Live" icon overlaid on the video, which can be misleading. NOTE: Most live streaming platforms provide a live chat interface alongside the video. This feature allows users watching the video to comment or react to the video in real-time. Updated August 17, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/live_streaming Copy

Load Balancing

Load Balancing Load balancing is the practice of distributing a workload across multiple computers. It is typically performed to spread network traffic evenly between a group of web servers, but may also divide up other computational tasks like database queries and data mining. It prevents a single server from becoming overwhelmed while other servers in the cluster sit idle. By removing bottlenecks, proper load balancing helps improve the performance of servers hosting websites and web applications. If a single web server suddenly gets five or ten times as many requests per second as it usually does, it may struggle to keep up with that demand. It may serve content slower than usual or even crash under the load. Balancing those requests between multiple servers helps keep websites and web applications available and responsive. Load balancing even works across several locations, so if one server's Internet connection is saturated, requests can go to another data center to use bandwidth more effectively. [Image: Without load balancing, some servers may be overloaded while others sit idle.] Whether load balancing happens on a local network or in a large data center, it requires dedicated hardware or software to divide incoming traffic between servers. These hardware devices and software programs are known as load balancers. When it receives incoming traffic, a load balancer distributes requests using a load-balancing algorithm. Static algorithms distribute tasks in a predetermined pattern like a round-robin sequence, or in a randomized order. Dynamic algorithms instead consider each server's current load to route tasks to the least busy servers. Dynamic balancing across a distributed CDN may also consider geography by routing requests to the server nearest to the client, reducing latency by reducing the distance data packets need to travel. Updated November 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/load_balancing Copy
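As a simplified illustration of a static load-balancing algorithm, the PHP sketch below hands out requests to a hypothetical pool of servers in round-robin order. Real load balancers also track server health and, for dynamic algorithms, each server's current load.

    <?php
    // Hypothetical pool of backend servers
    $servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"];
    $next = 0;

    function pickServer(&$next, $servers) {
        $server = $servers[$next];
        $next = ($next + 1) % count($servers);   // wrap back around to the first server
        return $server;
    }

    for ($i = 0; $i < 6; $i++) {
        echo pickServer($next, $servers) . "\n";
    }
    // Requests cycle through 10.0.0.1, 10.0.0.2, 10.0.0.3, then repeat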

Localhost

Localhost "Localhost" is the default hostname that refers to the local computer that a program is running on. It uses the IP address of 127.0.0.1 — also known as a "loopback" address because it routes information sent to it back to the local machine. Developers use "localhost" to test websites and software by pointing their web browser to the local computer instead of a remote server. The localhost IP address 127.0.0.1 is a special address that is never assigned to any device by a network's router; it always points to the current device whenever requested. The name "localhost" is also reserved as a top-level domain name, not assignable by domain registrars, which prevents websites from hijacking and spoofing a local connection. "Localhost" is used by developers testing websites and applications to fake a connection to another server. For example, if you are running web server software on a computer while building a website, you can point your web browser to "localhost" to view the in-progress site. The connection does not need to travel over a network, but the browser treats it like any other domain name. Updated February 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/localhost Copy

Localization

Localization Localization is the process of adapting a product for a specific region and language. It may include several steps, but the most common is language translation. In the context of IT, localization typically applies to software and websites. Software Localization Software localization involves translating all user interface elements into a specific language. Translated items include menu items, buttons, instructions, help documentation, and templates. Each localized element has an ID instead of a single text string, which is associated with multiple translations. When the user installs a localized app, the installer detects the system language and loads the correct text string for each ID. Localized apps often provide a way to override the system language. You may find this option by opening the Preferences window and clicking the Language option. Localizing an app can provide developers with new markets where English is not the primary language. Even in countries where most people speak English, users generally prefer to use apps in their native language. While large developers such as Microsoft and Apple translate software into dozens of languages, smaller developers may limit translations to the most widely used languages. Some examples include: Spanish French German Chinese Japanese Website Localization The "World Wide Web" is accessible by people around the world, and not everybody speaks English. Therefore, websites with a high number of international visitors may offer content translated into multiple languages. Similar to software programs, websites can detect the user's system language (or browser language setting) and display the corresponding translation. Some websites have a language option (often at the top or bottom of each page) that allows users to select their preferred language from a list. No matter how localization is performed from a technical standpoint, the quality is only as good as the translations themselves. Therefore, large web publishers and software companies work with reputable translation companies to provide accurate localized content. NOTE: If a website does not offer a translation in your language, you may be able to use your browser's built-in translate feature (such as Google Chrome's "Translate this page" option). Automatic webpage translation may not be as accurate as manually translated content, but it can help you understand the text. Updated April 23, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/localization Copy
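A highly simplified PHP sketch of the string-ID approach described above (the IDs, translations, and detected language are all hypothetical examples):

    <?php
    // Each UI string has an ID with one translation per language
    $strings = [
        "save_button" => ["en" => "Save", "es" => "Guardar", "de" => "Speichern"],
        "quit_menu"   => ["en" => "Quit", "es" => "Salir",   "de" => "Beenden"],
    ];

    $language = "es";   // normally detected from the system or browser settings
    echo $strings["save_button"][$language];   // prints "Guardar"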

Log File

Log File A log file is a file that contains a list of events, which have been "logged" by a computer. Log files are often generated during software installations and are created by Web servers, but they can be used for many other purposes as well. Most log files are saved in a plain text format, which minimizes their file size and allows them to be viewed in a basic text editor. Log files created by software installers typically contain a list of files that were added or copied to the hard drive during the installation process. They may also include the date and time the files were installed, as well as the directory where each file was placed. Installer log files allow users to see exactly what files were installed on their system by a specific program. This may be useful when troubleshooting program crashes or when uninstalling a program. Web servers use log files to record data about website visitors. This information typically includes the IP address of each visitor, the time of the visit, and the pages visited. The log file may also keep track of what resources were loaded during each visit, such as images, JavaScript, or CSS files. This data can be processed by website statistics software, which can display the information in a user-friendly format. For example, a user may be able to view a graph of daily visitors during the past month and click on each day to view more detailed information. Log files may also be generated by software utilities, FTP programs, and operating systems. For example, Mac OS X saves a log of all program and system crashes, which can be viewed using the built-in Console application. Windows records application, security, and system logs, which can all be viewed using the included Event Viewer program. Most log files have a ".log" file extension, but many use the standard ".txt" extension or another proprietary extension instead. File Extension: .LOG Updated April 14, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/logfile Copy

Log On

Log On The verb "log on" refers to the process of accessing a secure computer system or website. When you log on to a system, you provide "login" information that authenticates you as a user. This information typically includes a username and password, though some logins require extra information, such as a PIN number or a correct answer to a security question. Most computer systems can be configured to require users to log on before accessing the system. For example, both Mac and Windows systems can be set up to require all users to enter a username and password each time the computer is restarted or woken up from sleep mode. Similarly, when you access a remote system through a remote access or FTP connection, you may be required to log on to the system. When you create an account on a secure website, you are typically required to log on to the site each time you want to access your account information. Examples include online banking websites, social networking sites like Facebook and LinkedIn, online Web forums, and a variety of other sites. Any website that allows you to create a unique user account will require you to log on in order to access your personal information. The only time you may not have to log on to one of these sites is if your login information has been stored in a cookie by your Web browser. NOTE: While the terms "log on" and "login" sound similar, they have separate meanings. "Log on" is a verb and refers to the process of accessing a secure system. "Login" is a noun that refers to a user's login information, such as a username and password combination. However, the phrase "log in" (two words) is an acceptable alternative to "log on," and the two terms can be used synonymously. Updated February 8, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/log_on Copy

Logic Error

Logic Error A logic error (or logical error) is a mistake in a program's source code that results in incorrect or unexpected behavior. It is a type of runtime error that may simply produce the wrong output or may cause a program to crash while running. Many different types of programming mistakes can cause logic errors. For example, assigning a value to the wrong variable may cause a series of unexpected program errors. Multiplying two numbers instead of adding them together may also produce unwanted results. Even small typos that do not produce syntax errors may cause logic errors. In the PHP code example below, the if statement may cause a logic error since the single equal sign (=) should be a double equal sign (==). Incorrect: if ($i=1) { ... } Correct: if ($i==1) { ... } In PHP, "==" means "is equal to," while "=" means "becomes." Therefore, the incorrect if statement always returns TRUE, since assigning 1 to the variable $i returns a TRUE value. In the correct code, the if statement only returns TRUE if $i is equal to 1. However, since the syntax of the incorrect code is acceptable, it will not produce a syntax error and the code will compile successfully. The logic error might only be noticed during runtime. Because logic errors are often hidden in the source code, they are typically harder to find and debug than syntax errors. Updated April 27, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/logic_error Copy

Logic Gate

Logic Gate Logic gates perform basic logical functions and are the fundamental building blocks of digital integrated circuits. Most logic gates take an input of two binary values, and output a single value of a 1 or 0. Some circuits may have only a few logic gates, while others, such as microprocessors, may have millions of them. There are seven different types of logic gates, which are outlined below. In the following examples, each logic gate except the NOT gate has two inputs, A and B, which can either be 1 (True) or 0 (False). The resulting output is a single value of 1 if the result is true, or 0 if the result is false. AND - True if A and B are both True OR - True if either A or B are True NOT - Inverts value: True if input is False; False if input is True XOR - True if either A or B are True, but False if both are True NAND - AND followed by NOT: False only if A and B are both True NOR - OR followed by NOT: True only if A and B are both False XNOR - XOR followed by NOT: True if A and B are both True or both False By combining thousands or millions of logic gates, it is possible to perform highly complex operations. The maximum number of logic gates on an integrated circuit is determined by the size of the chip divided by the size of the logic gates. Since transistors make up most of the logic gates in computer processors, smaller transistors mean more complex and faster processors. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/logicgate Copy
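The behavior of the gates listed above can be modeled in software with boolean operators. The PHP sketch below defines one function per gate (the function names are just for illustration):

    <?php
    function gate_and($a, $b)  { return $a && $b; }
    function gate_or($a, $b)   { return $a || $b; }
    function gate_not($a)      { return !$a; }
    function gate_xor($a, $b)  { return $a != $b; }        // true only if the inputs differ
    function gate_nand($a, $b) { return !($a && $b); }     // AND followed by NOT
    function gate_nor($a, $b)  { return !($a || $b); }     // OR followed by NOT
    function gate_xnor($a, $b) { return $a == $b; }        // true if the inputs match

    var_dump(gate_nand(true, true));   // bool(false)
    var_dump(gate_xor(true, false));   // bool(true)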

Login

Login A login is a set of credentials used to authenticate a user. Most often, these consist of a username and password. However, a login may include other information, such as a PIN number, passcode, or passphrase. Some logins require a biometric identifier, such as a fingerprint or retina scan. Logins are used by websites, computer applications, and mobile apps. They are a security measure designed to prevent unauthorized access to confidential data. When a login fails (i.e., the username and password combination does not match a user account), the user is disallowed access. Many systems block users from even trying to log in after multiple failed login attempts. Examples of logins include: Operating system login – Windows and Mac systems can be configured to require a login in order to use the computer after it is turned on or woken from sleep mode. A login may also be required to install software or modify system files. Website login – Webmail interfaces, financial websites, and many other sites require a username and password in order to access account information. App store login – App stores like Google Play and Apple's App Store require a login to download mobile apps, music, and other files. FTP login – File transfer programs often require a login in order to browse, send, and receive files from an FTP server. Router login – Wired and wireless routers typically require an administrator login to modify the settings. At a basic level, logins make user accounts possible. Most systems require unique usernames, which ensures every user's login is different. On a more advanced level, logins provide a security layer between unsecured and secure activity. Once a user logs in to a secure website, for example, all data transfers are typically encrypted. This prevents other systems from viewing or recording the data transferred from the server. NOTE: Login is technically a noun, not a verb. To enter a username and password is to "log in," "sign in," or "log on" (two words). Updated August 12, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/login Copy

Long

Long Long is a data type used in programming languages, such as Java, C++, and C#. A constant or variable defined as long can store a single 64-bit signed integer. So what constitutes a 64-bit signed integer? It helps to break down each word, starting from right to left. An integer is a whole number that does not include a decimal point. Examples include 1, 99, or 234536. "Signed" means the number can be either positive or negative, since it may be preceded by a minus (-) symbol. 64-bit means the number can store 2^64, or 18,446,744,073,709,551,616, different values. Because the long data type is signed (one bit is used for the sign), the possible integers range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, including 0. In modern programming languages, the standard integer (int) data type typically stores a 32-bit whole number. Therefore, if a variable or constant may potentially store a number larger than 2,147,483,647 (2^31 - 1), it should be defined as a long instead of an int. NOTE: In standard C, a long integer may be limited to a 32-bit value ranging from -2,147,483,648 to 2,147,483,647. Updated December 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/long Copy
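In PHP, for instance, the built-in integer type is 64 bits wide on 64-bit systems, so its limits match the signed long range described above:

    <?php
    echo PHP_INT_MAX;   // 9223372036854775807  (2^63 - 1)
    echo PHP_INT_MIN;   // -9223372036854775808 (-2^63)
    echo PHP_INT_SIZE;  // 8 bytes (64 bits) on a 64-bit build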

Loop

Loop In computer science, a loop is a programming structure that repeats a sequence of instructions until a specific condition is met. Programmers use loops to cycle through values, add sums of numbers, repeat functions, and many other things. Loops are supported by all modern programming languages, though their implementations and syntax may differ. Two of the most common types of loops are the while loop and the for loop. While Loop A while loop is the simplest form of a programming loop. It states that while a condition is valid, keep looping. In the PHP example below, the while loop will continue until $i is equal to $num.    $i = 1;    $num = 21;    while ($i < $num)  // stop when $i equals $num    {        echo "$i, ";        $i++;   // increment $i    } If $i is 1 and $num is 21, the loop will print out 1, 2, 3, 4… etc. all the way to 20. Then the loop will stop or "break" after 20 iterations because the while condition has been met. For Loop A for loop is similar to a while loop, but streamlines the source code. The for loop statement defines the start and end point as well as the increment for each iteration. Below is the same loop above defined as a for loop.    $num = 21;    for ($i = 1; $i < $num; $i++)  // stop when $i equals $num    {        echo "$i, ";    } Though for loops and while loops can often be used interchangeably, it often makes more sense to use one over the other. In most cases, for loops are preferred since they are cleaner and easier to read. However, in some situations, a while statement can be more efficient. For instance, the following PHP statement can be used to load all the values from a MySQL result into an array using only one line of code.    while ($row = mysql_fetch_array($result)) NOTE: Since loops will repeat until a specific condition is met, it is important to make sure the loop will break at some point. If the condition is never met, the loop will continue indefinitely, creating an infinite loop. Writing code that allows infinite loops is bad programming practice, since they can cause programs to crash. Updated February 3, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/loop Copy

Lossless

Lossless Lossless compression is a type of media compression that shrinks a file's size without any quality loss. A file shrunk with lossless compression is indistinguishable from the original. Lossless media formats save storage space while maintaining the full fidelity of the original file. For example, an image saved as a lossless PNG preserves every pixel of the original. A lossless FLAC audio file includes waveforms identical to the original audio sample. Professional photo and video editors often choose lossless formats to prevent artifacting and maintain as much quality as possible. [Image: Lossless compression preserves details better than lossy compression.] How Lossless Compression Works Unlike lossy compression, which discards certain parts of a file to create a smaller and lower-fidelity version of the original, lossless compression saves file space by rewriting the data more efficiently. Many lossless algorithms look through a file for any pattern that repeats, then save that pattern and reference it later instead of writing the same data multiple times. Many lossless media compression algorithms are similar to ones used for compressing documents and other files. For example, PNG compression is a derivative of the same algorithm that ZIP files use. NOTE: Lossless compression usually cannot reduce a file's size by as much as a lossy compression algorithm can. FLAC audio compression can often achieve a 40-50% reduction in file size, while a lossy MP3 file can reduce the original file size by 90%. Updated December 21, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lossless Copy
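Run-length encoding is one of the simplest examples of the pattern-based idea described above: repeated data is stored once with a count, and the original can be rebuilt exactly. The PHP sketch below is only an illustration of the concept; it is not the algorithm that PNG or FLAC actually uses.

    <?php
    // Run-length encode a string: "AAAABBBCC" becomes "4A3B2C"
    function rleEncode($input) {
        $output = "";
        $length = strlen($input);
        for ($i = 0; $i < $length; ) {
            $run = 1;
            while ($i + $run < $length && $input[$i + $run] == $input[$i]) {
                $run++;                            // count how many times the character repeats
            }
            $output .= $run . $input[$i];          // store the count and the character once
            $i += $run;
        }
        return $output;
    }

    echo rleEncode("AAAABBBCC");   // prints "4A3B2C" (fully reversible, so no data is lost)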

Lossy

Lossy Lossy file compression is a form of media compression that shrinks a file by discarding some of its information, creating a lower-fidelity approximation of the original file at a significantly smaller file size. The resulting images, audio, and videos may exhibit artifacts where the original information was lost. People use lossy compression algorithms to compress image files (such as JPEGs), audio files (like MP3s or AACs), and video files (like MP4s). These compression methods attempt to discard information that most people won't see or hear to create a much smaller version of the original. For example, a JPEG image may take up less than 20% of the disk space of the original image with little noticeable effect; a compressed MP3 file may be one-tenth the size of the original audio file and may sound almost identical. [Image: A lossy JPEG image at 400% zoom.] The data lost during lossy compression can become evident when closely examined. For example, a highly-compressed JPEG image may have duller colors, blurry details, and blocky artifacts; the high frequencies in a compressed MP3 audio file may sound warbly; an MP4 video may show blocky artifacts in dark areas or when there's a lot of motion on the screen. Most lossy compression algorithms allow for various "quality settings," which adjust how compressed the file will be. They involve a trade-off between quality and file size. Compressing a file with a higher compression level will take up less space but may include more artifacts than a less compressed file. Once you apply lossy compression to a file, you cannot un-compress it to restore any lost details. It's wise to create a compressed copy of a file when you need one and preserve the uncompressed original when possible. NOTE: Some image and audio formats allow lossless compression, which does not reduce the file's quality but will result in larger file sizes. Updated December 20, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/lossy Copy

Low-Level Language

Low-Level Language A low-level language is a type of programming language that contains basic instructions recognized by a computer. Unlike high-level languages used by software developers, low-level code is often cryptic and not human-readable. Two common types of low-level programming languages are assembly language and machine language. Software programs and scripts are written in high-level languages, like C#, Swift, and PHP. A software developer can create and edit source code in a high-level language using a programming IDE or even a basic text editor. However, the code is not recognized directly by the CPU. Instead, it must be compiled into a low-level language. Assembly language is one step closer to a high-level language than machine language. It includes commands such as MOV (move), ADD (add), and SUB (subtract). These commands perform basic operations, such as moving values into memory registers and performing calculations. Assembly language can be converted to machine language using an assembler. Machine language, or machine code, is the lowest level of computer languages. It contains binary code, often generated by compiling high-level source code for a specific processor. Most developers never need to edit or even look at machine code. Only programmers who build software compilers and operating systems need to view machine language. Updated February 14, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/low-level_language Copy

LPI

LPI Stands for "Lines Per Inch." LPI is used to measure the resolution of images printed in halftones. Because halftone images are printed as a series of dots, the higher the LPI number, the more dense the dots can be, resulting in a finer resolution. Newspapers are typically printed in a resolution of 85 lpi, while magazines may use 133 lpi or higher. Because the naked eye can distinguish halftone dots up to about 120 lpi, you are more likely to notice the dots in newspaper print than in magazines. Of course, if you look closely enough, you may be able to see the dots in images printed in 150 lpi or more. But, in normal viewing, it is natural to see the dots as a continuous image even at 85 lpi. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lpi Copy

LTE

LTE Stands for "Long Term Evolution." LTE is a 4G telecommunications standard used for transferring data over cellular networks. It supports data transfer rates of up to 100 Mbps downstream and 50 Mbps upstream. While the terms "4G" and "LTE" are often used synonymously, LTE is actually a subset of 4G technologies. Other 4G standards include Mobile WiMAX (IEEE 802.16e) and WirelessMAN-Advanced (IEEE 802.16m). However, since LTE was designed to be an international telecommunications standard, it has been widely adopted by the United States and several countries in Europe and Asia. Therefore, LTE is by far the most common 4G standard. In order to use LTE, you must have an LTE-compatible device, such as a smartphone, tablet, or USB dongle that provides wireless access for a laptop. When LTE first became available in 2009, most devices did not yet support the technology. However, most major cellular providers now offer LTE and therefore most phones and other mobile devices are now LTE-compatible. Many devices, such as iPhones and iPads, will display the letters "LTE" in the status bar at the top of the screen when you are connected to an LTE network. If "4G" is displayed instead, LTE may not be available in your area. NOTE: LTE may also refer to LTE Advanced, a newer version of LTE that supports peak download speeds of 1 Gbps and upload speeds of 500 Mbps (10x the speeds of standard LTE). Updated November 21, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lte Copy

Lua

Lua Lua is a programming language commonly used to extend, or add functionality, to software applications. It uses simple syntax (similar to C), allowing it to work with a number of other languages. Lua has a wide array of uses, but it is commonly used as an extension language for web applications and video games. Lua is a scripting language, since the code can be parsed and run on-the-fly by an interpreter. However, like Python, Lua code can also be compiled into an executable binary file. Pre-compiling a Lua script with the luac compiler can improve performance, so Lua code is often compiled when it is embedded in an existing program. While Lua is designed to be simple and portable, it is also a highly capable language. Lua supports a wide variety of data types, including strings, floating point numbers, and arrays. It also includes many different operators for performing calculations, comparisons, and logic operations. Lua also supports functions, loops, and if-then statements. The language even includes garbage collection for automatic memory management. Since Lua is relatively lightweight – meaning it does not use more system resources than necessary – it is an ideal language for performing basic tasks. For example, a programmer may code a video game in C++, but use Lua to manage the characters and objects within the game. The simplicity of Lua can make it easier to code individual aspects of a program without the overhead other languages require. File extension: .LUA Updated June 19, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lua Copy

LUN

LUN Stands for "Logical Unit Number." LUNs are used to identify SCSI devices, such as external hard drives, connected to a computer. Each device is assigned a LUN, from 0 to 7, which serves as the device's unique address. LUNs can also be used for identifying virtual hard disk partitions, which are used in RAID configurations. For example, a single hard drive may be partitioned into multiple volumes. Each volume can then be assigned a unique LUN. However, few modern computers use LUNs, since SCSI devices have mostly been replaced by USB and Firewire devices. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/lun Copy

M.2

M.2 M.2 is a form factor for small internal expansion cards that plug directly into a slot on a computer's motherboard. It is most common in solid-state drives (SSDs) but is also often used to add Wi-Fi, Bluetooth, or NFC expansion cards. The specification provides a computer bus interface for up to four PCI Express lanes, a single SATA port, or a USB connection. An M.2 card looks similar to a memory module, but with connector pins found on a short side instead of a long side. While the M.2 format supports storage devices using a legacy SATA port, it is most commonly associated with NVMe SSDs. NVMe drives utilize multiple PCI Express lanes for higher bandwidth and lower latency, offering read-write speeds up to 3.5 GBps (compared to 600 MBps over SATA). Keys and Sizes An M.2 slot on a motherboard is keyed with one or two notches, referred to by letters that indicate the location of the notch—the "A" notch removes pins 8-15, "B" removes 12-19, "E" removes 24-31, and "M" removes 59-66. The notches indicate what type of card that slot supports. Slots keyed A or E support wireless cards that add Wi-Fi, Bluetooth, or NFC radios. Slots keyed B or M support SSDs at different speeds—slots keyed M support faster drives than slots keyed B. [Image: A Crucial NVMe SSD, keyed M and sized 2280 (top), and an Intel Wi-Fi card, keyed A+E and sized 2230 (bottom).] M.2 cards come in a standard size range, denoted by a 4- or 5-digit number. The first two digits indicate the width in millimeters, while the final two or three indicate the length. For example, a small Wi-Fi card may have a size of 2230 (22 mm wide and 30 mm long), while an SSD may have a size of 2280 or 22110. When installing an M.2 card, the end of the card must screw into a corresponding hole on the motherboard, so it is necessary to know what sizes the motherboard supports before installing a card. Updated October 24, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/m2 Copy

M1

M1 The M1 is the first Apple silicon chip. Apple launched the M1 in November 2020 with the Mac mini and MacBook Air. Unlike previous Mac chips, the M1 is designed and manufactured by Apple. It is ARM-based, meaning it uses a design similar to the "A-series" chips found in iPhones and iPads, rather than the Intel "x86" architecture. Additionally, the M1 is a System on a Chip (SoC), which contains several types of processors. Besides the CPU, the M1 includes a GPU, neural engine, media engine, and "high-efficiency" processing cores that help reduce power consumption. Not only does the M1 contain multiple types of processors, but most of them are multi-core. Below are the M1 processor specs: CPU: 8-core (4 high-performance cores running at a 3.2 GHz clock speed + 4 high-efficiency cores) GPU: 8-core (7-core on the low-end model) Neural Engine: 16-core (for AI and ML computations) Transistors: 16 billion (5 nm) RAM: 8 or 16 GB of unified (integrated) memory L2 Cache: 12 MB shared by the performance cores; 4 MB shared by the efficiency cores NOTE: Apple released two high-performance variants of the M1 — the M1 Pro and M1 Max — with new MacBook Pros in October 2021. Updated February 12, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/m1 Copy

MAC Address

MAC Address Stands for "Media Access Control Address." A MAC address is a hardware identification number that uniquely identifies each device on a network. Every network interface card, such as an Ethernet card or a Wi-Fi adapter, has a permanent MAC address assigned by its manufacturer; some operating systems allow an adapter's MAC address to be temporarily changed, or "spoofed," in software. Routers and other network infrastructure equipment use both IP and MAC addresses to track which devices connect to a network; an IP address represents the software connection, while a MAC address represents the hardware connection. A network administrator can configure a network to reject connections from specific MAC addresses or restrict network access to a set of approved MAC addresses. Since a MAC address uniquely identifies a particular network adapter, there must be enough unique addresses for every device that needs one. The address format uses a 48-bit identifier, formatted as six hexadecimal numbers separated by colons, that provides more than 281 trillion potential addresses. For example, an Ethernet card may have a MAC address of 00:0d:83:b1:c0:8e. NOTE: When a computer or mobile device connects to a public Wi-Fi access point — or even scans for available networks — its MAC address may be recorded and possibly used to track it as it moves around and connects to other networks. To increase privacy, some operating systems now allow you to automatically create a randomly-generated temporary MAC address when connecting to new Wi-Fi networks. Updated December 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/macaddress Copy
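A program can check whether a string matches the six-group hexadecimal format described above with a regular expression. Below is a minimal PHP sketch (the function name is just for illustration, and the pattern only validates formatting, not whether the address is actually assigned to a device):

    <?php
    function isValidMac($mac) {
        // Six pairs of hex digits separated by colons, e.g. 00:0d:83:b1:c0:8e
        return preg_match('/^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$/', $mac) === 1;
    }

    var_dump(isValidMac("00:0d:83:b1:c0:8e"));   // bool(true)
    var_dump(isValidMac("00:0d:83:b1:c0"));      // bool(false), only five groups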

Machine Language

Machine Language Machine language, or machine code, is a low-level language comprised of binary digits (ones and zeros). High-level languages, such as Swift and C++ must be compiled into machine language before the code is run on a computer. Since computers are digital devices, they only recognize binary data. Every program, video, image, and character of text is represented in binary. This binary data, or machine code, is processed as input by the CPU. The resulting output is sent to the operating system or an application, which displays the data visually. For example, the ASCII value for the letter "A" is 01000001 in machine code, but this data is displayed as "A" on the screen. An image may have thousands or even millions of binary values that determine the color of each pixel. While machine code is comprised of 1s and 0s, different processor architectures use different machine code. For example, a PowerPC processor, which has a RISC architecture, requires different code than an Intel x86 processor, which has a CISC architecture. A compiler must compile high-level source code for the correct processor architecture in order for a program to run correctly. Machine Language vs Assembly Language Machine language and assembly language are both low-level languages, but machine code is below assembly in the hierarchy of computer languages. Assembly language includes human-readable commands, such as mov, add, and sub, while machine language does not contain any words or even letters. Some developers manually write assembly language to optimize a program, but they do not write machine code. Only developers who write software compilers need to worry about machine language. NOTE: While machine code is technically comprised of binary data, it may also be represented in hexadecimal values. For example, the letter "Z," which is 01011010 in binary, may be displayed as 5A in hexadecimal code. Updated January 18, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/machine_language Copy

Machine Learning

Machine Learning Machine learning, commonly abbreviated "ML," is a type of artificial intelligence (AI) that "learns" or adapts over time. Instead of following static rules coded in a program, ML technology identifies input patterns and contains algorithms that evolve over time. Machine learning has a wide variety of applications, many of which are now part of everyday life. Below are a few examples: Medical diagnoses Autonomous vehicles Online ad targeting – Google AdSense and Facebook Advertising Speech recognition – Google Assistant, Amazon Alexa, Microsoft Cortana, and Apple Siri Image recognition – Google image search, facial recognition on Facebook and in Apple Photos Self-Driving Vehicle Example Autonomous vehicles incorporate machine learning to improve their safety and reliability. A self-driving car that uses traditional artificial intelligence can respond to any road conditions it has been programmed to handle. However, if the software encounters unrecognized input, the car may default to a backup safety measure, such as slowing down, stopping, or requiring a manual override. Machine learning can enable a vehicle to recognize events and objects that have not been explicitly programmed in the source code. For example, a car may be programmed to recognize street lights, but not flashing lights on construction barricades. By learning from experience — possibly recording the driving behavior of a human driver — the car will start to recognize construction barriers and respond accordingly. ML technology is what enables autonomous vehicles to differentiate between objects on the road, such as cars, bikes, humans, and animals. It also helps automobiles drive more reliably in imperfect weather conditions and on roads without clear lines. The goal is to enable vehicles to drive like humans while avoiding mistakes caused by human error. Updated June 10, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/machine_learning Copy

Macintosh

Macintosh Mac (formerly Macintosh) is a line of desktop and laptop computers developed by Apple. Each Mac computer runs a version of macOS, Apple's computer operating system. As of 2023, nearly the entire lineup runs on custom-designed Apple Silicon processors. The original Macintosh, released in 1984, was the first commercially successful personal computer with a graphical user interface (GUI). It was a small beige all-in-one machine with a 9-inch monochrome CRT display, sold with a bundled mouse and keyboard. Over the years, Apple has released many new models of Macs in many different form factors, including desktop computers, portable laptops, and all-in-one models. When the original iMac was introduced, the product line's name also changed from Macintosh to just Mac. The current Mac line (as of 2023) includes the following models: MacBook Air - A lightweight, portable computer designed for general use and travel MacBook Pro - A more-powerful portable computer aimed at pro users iMac - An all-in-one desktop designed for home, business, and educational users Mac Mini - A compact system unit designed for general use, popular as a home / small business server Mac Studio - A small, high-power desktop computer aimed at professional users Mac Pro - A professional full-tower system unit, aimed at professional users who need an expandable computer NOTE: While Macs are technically personal computers, the term "PC" is often reserved for computers that run Windows or Linux; for decades, "Mac vs PC" was a hot subject of online debate. Unlike PCs, which can be built by different companies and bundled with a licensed version of Windows, Apple designs and manufactures all models of Mac. Updated February 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/macintosh Copy

macOS

macOS macOS is a desktop operating system developed by Apple that runs on the Mac line of personal computers. It was the first mainstream operating system to feature a graphical user interface and mouse input. The macOS desktop interface is called the Finder. It includes the persistent menu bar at the top of the screen and the main desktop below that. The Finder also serves as the file manager for macOS. You can open folders in windows to browse and organize your files. The Dock at the bottom of the screen shows icons for frequently-used applications, as well as those that are currently running. Apple bundles several other applications with macOS, including the Safari web browser, an email client, and various system utilities. The history of macOS is split into two eras. The first era is represented by the original Macintosh system software (now known as Classic Mac OS). It included the system software from the first Macintosh in 1984 and all subsequent updates through the final release, Mac OS 9 in 1999. The second era started with the release of Mac OS X in 2001. Mac OS X 10.0 and all later versions are instead an entirely separate operating system based on Unix. While the underlying technology is fundamentally different, the user interface maintains a similar look and feel. Classic Mac OS The original Macintosh System Software introduced several of the same concepts that macOS still features, such as the Finder file manager, the menu bar, and the use of a mouse to navigate the GUI. New releases added features like support for multitasking, networking, hard drives, and color monitors. The final version of Classic Mac OS was released in 1999. Mac OS X During the 1990s, Apple made several attempts at developing a next-generation version of Mac OS that failed to produce results. Apple purchased NeXT Computer in 1997 to acquire its NeXTSTEP operating system, which served as a foundation for the next generation of Mac OS. Mac OS X 10.0, released in 2001, offered a multi-user environment, protected memory, and preemptive multitasking. A compatibility layer, called the Classic environment, allowed Mac OS X users the option to run some Classic Mac OS applications. Over a series of releases, Mac OS X improved performance and added many new features. It supported the transition of the Mac line from PowerPC to Intel processors using a compatibility layer called Rosetta. A variant of Mac OS X was used on the iPhone, which eventually became iOS and iPadOS. Later versions also run on Apple Silicon processors, which Apple released in 2020. Updated February 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/macos Copy

Macro

Macro A macro is a series of recorded actions saved by a program to repeat later. Macros can help computer users save time by automating repetitive tasks. Some programs include built-in macro support to record and replay actions within that program, while other utilities can record and replay actions in other programs. Computer users can record their own macros or download and run macros from other sources. A macro is similar to a script, as both contain a series of tasks for a program to run. While a script generally consists of program commands executed by a command line, a macro includes user interactions with parts of the screen. For example, a macro can record you entering text in a text field, opening a menu, selecting a command, then clicking a button in a program window. Office applications like Word and Excel include support for macros, allowing you to record a series of actions — entering a return address or formatting data in a certain way — and replay those actions with a single keyboard shortcut or button press. You can edit macros after recording them to add, remove, or modify steps. You can also write new macros from scratch instead of recording them. Not all programs use the word "macro" to refer to recorded actions — for example, Photoshop allows you to record a series of commands that you can run from a menu but instead calls them "actions." Some keyboards and mice designed for gamers and power users ship with software that allows you to record a macro, then assign it to a special key or button. Other software utilities let you record actions system-wide, then create a custom keyboard shortcut to trigger the macro that you can run no matter which program you're using. NOTE: The word "macro" may also refer to macro photography, a method for taking photographs of a small object at close range. Updated March 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/macro Copy
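System-wide macro tools ultimately replay input events such as clicks and keystrokes. As a rough sketch of that idea, the Python snippet below uses the third-party pyautogui library (an assumption chosen for illustration; dedicated macro utilities are not built on this library), with made-up coordinates and text:

    import pyautogui

    pyautogui.click(120, 240)             # click a text field at screen position (120, 240)
    pyautogui.write("123 Main Street")    # type a return address into the field
    pyautogui.hotkey("ctrl", "s")         # press Ctrl+S to save the document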

Macro Photography

Macro Photography Macro photography is close-up photography of small objects that depicts them as larger than life-sized. Digital cameras have long been capable of macro photography through special macro photography lenses and settings modes. Smartphones with multiple-lens camera systems are also capable of macro photography through dedicated macro settings on wide-angle lenses. Setting up a digital camera for macro photography involves a mix of both hardware and software settings. Cameras that support interchangeable lenses can use special macro lenses that achieve high magnification levels. Close-up lenses that attach to the front of another camera lens are also an option, increasing the total magnification with some potential quality loss due to the stacking of lenses. Some digital cameras also include a macro mode that you can enable that automatically sets the aperture, shutter speed, and ISO to their ideal levels for close-up photography. Some modern smartphones also support macro photography using dedicated camera modes. Smartphones with multi-camera systems, like the iPhone 13 Pro, Samsung Galaxy S21 Ultra, and the Google Pixel 7 Pro, can capture macro photos using their ultra-wide-angle lenses. Just like a digital camera's macro mode, a smartphone's macro mode automatically changes some camera settings to optimize for close-up photographs of small subjects. Some camera apps automatically switch to macro mode when focusing on a subject at short range, making macro photography as easy as any other smartphone photography. NOTE: Macro mode on a digital camera or smartphone is often indicated by a small flower icon. Updated August 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/macro_photography Copy

Mail Server

Mail Server A mail server (or email server) is a computer system that sends and receives email. In many cases, web servers and mail servers are combined in a single machine. However, large ISPs and public email services (such as Gmail and Hotmail) may use dedicated hardware for sending and receiving email. In order for a computer system to function as a mail server, it must include mail server software. This software allows the system administrator to create and manage email accounts for any domains hosted on the server. For example, if the server hosts the domain name "techterms.com," it can provide email accounts ending in "@techterms.com." Mail servers send and receive email using standard email protocols. For example, the SMTP protocol sends messages and handles outgoing mail requests. The IMAP and POP3 protocols receive messages and are used to process incoming mail. When you log on to a mail server using a webmail interface or email client, these protocols handle all the connections behind the scenes. Mail server software is available for multiple platforms. The most popular mail server for Windows is Microsoft Exchange Server, an enterprise product used by large businesses. However, many other options exist, including Ipswitch IMail Server, IceWarp Mail Server, MailEnable, and hMailServer. Popular Linux options include Exim for sending mail and Dovecot and Courier for receiving mail. Updated October 2, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mail_server Copy
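For example, an email client or script hands outgoing mail to a mail server using SMTP. The Python sketch below uses only the standard library; the server name, port, addresses, and password are placeholders, not a real account:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Test message"
    msg.set_content("Hello from a mail client.")

    with smtplib.SMTP("mail.example.com", 587) as server:
        server.starttls()                                  # encrypt the connection
        server.login("sender@example.com", "password")     # authenticate with the mail server
        server.send_message(msg)                           # SMTP handles the outgoing mail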

Mainframe

Mainframe A mainframe computer is a powerful computer meant for high-volume data processing. Mainframes have a large amount of memory, many parallel processors, and a high data input/output capacity. Large organizations and businesses use mainframes for high-volume data analysis, database management, resource tracking, and transaction processing. Mainframe computers are considered the second-most powerful variety of computers after supercomputers. A key element of a mainframe computer's design is a focus on reliability and uptime. Mainframes need to be scalable so that organizations can add more capacity when needed. They are also designed with redundant parts so that any failure does not disrupt the mainframe's operations. Administrators can even hot-swap components in and out of a mainframe without turning it off, which allows most mainframes to go years without a reboot. Support for advanced virtualization lets a single mainframe run thousands of virtual machines. All this helps a mainframe handle mission-critical, high-volume tasks; for example, banks use mainframes to process thousands of transactions per second, all day, every day. Mainframes use specialized hardware unlike anything available in consumer desktop computers. They use specially-designed processors with a high number of processor cores, designed to work in parallel with up to 32 processors at once. They also run specialized mainframe operating systems, like IBM's z/OS or mainframe variants of Linux. Updated November 2, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mainframe Copy

Malware

Malware Short for "Malicious Software." Malware is a category of software programs designed to damage or do other unwanted actions to a computer system. Hackers generally develop malware for some criminal purpose — like stealing protected information or taking control of a computer to add it to a botnet — although some create malware solely to spread damage. Anti-virus and anti-malware software can monitor a computer to prevent malware from infecting it. There are several common types of malware, categorized by how they infect a computer and what they do once in place. For example, worms and trojan horses are types of viruses defined by how they spread; worms spread automatically, while trojan horses trick a computer user into installing malware. Spyware, ransomware, and rootkits are instead defined by what they do. Spyware spies on you to steal your personal information like banking info or passwords, ransomware encrypts your files to hold them for ransom, and rootkits provide a hacker with root-level access to your computer. Anti-virus and anti-malware utilities can help protect your computer against malware, but only if you install and run them. Current versions of Windows include a built-in anti-virus utility, while other operating systems support third-party anti-virus software instead. However, be careful and only install anti-virus software from a trusted source; fake anti-virus software that actually includes malware is increasingly common. Updated January 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/malware Copy

MAMP

MAMP Stands for "Mac OS X, Apache, MySQL, and PHP." MAMP is a variation of the LAMP software package that can be installed on Mac OS X. It can be used to run a live web server from a Mac, but is most commonly used for web development and local testing purposes. Apache (or "Apache HTTP Server") is the component used to configure and run the web server. Once installed, Apache enables a Mac to host one or more websites. By configuring and running a local Apache web server, web developers can view their webpages in a web browser without publishing to an external server. MAMP also includes MySQL and PHP. These two components are common open source technologies used for creating dynamic websites. MySQL is a popular DBMS and PHP is a web scripting language. Webpages that include PHP code can access data from a MySQL database and load dynamic content on-the-fly. By installing PHP and MySQL locally, a developer can build and test a dynamic website on their Mac before publishing it on the Internet. Apache, MySQL, and PHP are open source components that can be installed individually. However, it is faster and easier to install a pre-built "AMP" package like MAMP or MAMP Pro. Both MAMP and MAMP Pro also include a graphical user interface (GUI) that can be used to manage the local web server. NOTE: In some cases, the "P" in MAMP may stand for Perl or Python, which are other scripting languages that can be used in place of PHP. The Windows version of LAMP is called WAMP. Updated May 23, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mamp Copy

MAN

MAN Stands for "Metropolitan Area Network." A MAN is a network that spans a large area, such as a town or city. It is larger than a campus area network (CAN), but smaller than a wide area network (WAN). An example of a MAN is a series of wireless routers distributed across a city. The routers are almost always linked to an Internet connection, which allows public users to connect to the Internet once they connect to the MAN. They are also bridged together, meaning each access point has the same name and authentication method. Once you connect your device to one of the routers, it should automatically connect to routers in other locations within the MAN. There is no hard limit to the size range of a MAN, but most are between 3 and 30 miles in diameter. Some campus area networks are larger than 3 miles in diameter, but they are technically not MANs. CANs typically provide more security and faster speeds than a MAN. They are also more likely to limit access to public users. Metropolitan area networks are sometimes considered wide area networks since they span large areas. However, a MAN is a single network, not several interconnected networks, which is what comprises a WAN. Updated December 1, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/man Copy

MANET

MANET Stands for "Mobile Ad-Hoc Network." A MANET is an ad-hoc network that connects a group of mobile devices together into a decentralized wireless mesh network. A MANET can move around, add and remove mobile devices, and reconfigure itself on the fly. These networks often use standard wireless technologies like Wi-Fi, cellular, and Bluetooth, although some specialized equipment uses UHF and VHF radio frequencies. By using peer-to-peer networking instead of relying on a single master node, MANETs can distribute network traffic that might otherwise overload a centralized router. Every device is able to route traffic to its destination by sending it from one device to another. If any device in the mesh has an Internet connection, it can bridge traffic between the rest of the network and the Internet. This way, even if most devices are out of range of a cellular tower, they can still reach the Internet through the MANET. However, distributed networks like MANETs have limitations. Data hopping between multiple nodes introduces a lot of latency and limits the speed data can travel. Constant network activity over a MANET also uses significant power, which can drain batteries on mobile devices faster than normal use. Even though MANETs have drawbacks, they are helpful in situations that benefit from their decentralized nature: Vehicular ad-hoc networks (VANETs) allow certain vehicles to communicate wirelessly with each other and with roadside equipment. Smartphone ad-hoc networks (SPANs) create a mesh between smartphones for communication when infrastructure (like existing Wi-Fi and cellular networks) is unavailable or overloaded. These networks are often used during large gatherings like conferences, music festivals, and even large-scale protests. Rescue workers often use MANETs to coordinate activities during natural disasters when existing infrastructure has been damaged or destroyed. Military units can use MANETs for tactical wireless communication without relying on vulnerable infrastructure. Unmanned aerial vehicles can use MANETs to coordinate movements, and naval vessels can use ship-to-ship wireless networks that operate faster than satellite networks. Updated January 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/manet Copy

Margin

Margin Margin is a page layout term used in both print and Web publishing. In print, "margin" typically refers to page borders, while on the Web it describes the spacing between elements on a webpage. 1. Printed Pages When you create a new document in a word processor, the default margins are usually defined as one inch on all sides. However, depending on the program, the side margins may vary anywhere from 0.75 to 1.25 inches. These margins create a frame around the content of the page so that the text does not run all the way to the edges. The white space along the edges of the document makes the page look cleaner and the text easier to read. In many cases, it is not necessary to edit the page margins. However, some documents, such as school papers, may require specific margins. In these cases, you can edit the margins in your word processor's Document Formatting window. This window can typically be accessed by selecting Format → Document from the program's menu bar. In Microsoft Word, you can edit page margins using the Layout tab in the ribbon located above the document. 2. Webpages Margin is also a CSS property used for creating space between HTML "block" elements on webpages. It can specify the top, right, bottom, and left margins around an object. Below are several examples of how margins can be specified in CSS. margin: 12px; — 12 pixel margin around all sides of the element margin: 4px 6px; — top and bottom margins: 4 pixels; right and left margins: 6 pixels margin: 10px 15px 20px 5px; — top: 10px, right: 15px, bottom: 20px, left: 5px margin-right: 30px; — 30 pixel margin on the right side of the element Margins can also be negative values, which may be used to overlay an object on another element within a webpage. In some cases, margins may also be defined as "auto." For example, the CSS declaration "margin: 0px auto;" is often used to center elements on webpages. Updated October 3, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/margin Copy

Markup Language

Markup Language A markup language is a computer language that uses tags to define elements within a document. It is human-readable, meaning markup files contain standard words, rather than typical programming syntax. While several markup languages exist, the two most popular are HTML and XML. HTML is a markup language used for creating webpages. The contents of each webpage are defined by HTML tags. Basic page tags, such as <html>, <head>, and <body>, define sections of the page, while tags such as <h1>, <p>, <img>, and <a> define elements within the page. Most elements require a beginning and end tag, with the content placed between the tags. For example, a link to the TechTerms.com home page may use the following HTML code: <a href="https://techterms.com">TechTerms.com</a> XML is used for storing structured data, rather than formatting information on a page. While HTML documents use predefined tags (like the examples above), XML files use custom tags to define elements. For example, an XML file that stores information about computer models may include a section like the following: <computer> <manufacturer>Dell</manufacturer> <model>XPS 17</model> <processor>2.00 GHz Intel Core i7</processor> <memory>6GB</memory> <storage>1TB</storage> </computer> XML is called the "Extensible Markup Language" since custom tags can be used to support a wide range of elements. Each XML file is saved in a standard text format, which makes it easy for software programs to parse or read the data. Therefore, XML is a common choice for exporting structured data and for sharing data between multiple programs. NOTE: Since both HTML and XML files are saved in a plain text format, they can be viewed in a standard text editor. You can also view the HTML source of an open webpage by selecting the "View Source" option. This feature is found in the View menu of most Web browsers. Updated June 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/markup_language Copy

Martech

Martech Martech combines the words "marketing" and "technology." It covers a wide range of technologies used in online marketing. Martech is related to adtech (advertising technology) but focuses on marketing services rather than digital ads. Examples of martech include: CRM (Customer Relationship Management) software Email marketing systems Social media marketing technologies Online tracking tools User analytics SEO (Search Engine Optimization) Martech software, such as CRM SaaS applications and email delivery services, enable marketing teams to reach targeted audiences. For example, instead of mass-emailing (e.g., spamming) recipients, martech makes it possible to send emails only to those most interested in receiving them. The software is often part of a CRM suite, which companies use to maintain a database of existing customers and potential leads. Complex algorithms assign values, e.g., likelihood to purchase, to people in the database, allowing companies to make effective marketing decisions. Cross-site and cross-platform tracking enable companies to track user behavior across websites and devices. For example, if you visit an e-commerce website on your smartphone, cross-platform tracking allows the company to display personalized recommendations on your laptop. Ever wonder how you get an email about something you left in your online shopping cart? You can thank martech for that. NOTE: Because martech often tracks user activity, online marketing services must follow data collection and usage guidelines, such as GDPR. Organizations that collect and store data often publish a privacy policy, which explains how they use the data. Updated February 26, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/martech Copy

Mashup

Mashup The term "mashup" has several meanings. It was originally used to describe songs that meshed two different styles of music into one song. For example, a classic rock song put to a well-known hip-hop beat may be considered a mashup. It is also used to describe videos that have been compiled using different clips from multiple sources. For example, a skateboarding movie created from several different skateboard videos found online would be considered a video mashup. A mashup also describes a Web application that combines multiple services into a single application. For example, a Web forum may contain a mashup that uses Google Maps to display what parts of the world the users are posting from. Yahoo offers a mashup called Yahoo! Pipes that aggregates RSS feeds into a single page that can be navigated using a graphical interface. The primary purpose of most Web mashups is to consolidate information with an easy-to-use interface. Because the combinations of Web applications are limitless, so are the possibilities of mashups. Therefore, as mashups continue to evolve, don't be surprised to see them popping up on your favorite websites. After all, we can always use new tools that help make information easier to find! Updated October 25, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mashup Copy

Matrix

Matrix A matrix is a grid used to store or display data in a structured format. It is often used synonymously with a table, which contains horizontal rows and vertical columns. While the terms "matrix" and "table" can be used interchangeably, matrixes (or matrices) are considered more flexible than tables. For example, tables generally have a fixed number of rows and columns, while the size of a matrix may change dynamically. The term "matrix" may also be used to refer to a table that has groups of columns within a single row. In computing, a matrix may be used to store a group of related data. For example, some programming languages support matrixes as a data type that offers more flexibility than a static array. By storing values in a matrix rather than as individual variables, a program can access and perform operations on the data more efficiently. In mathematics, matrixes are used to display related numbers. Math matrixes are usually presented as a list of numbers within square brackets. A matrix that only contains one row is called a row vector, while a matrix that contains a single column is called a column vector. A matrix that contains the same number of rows as columns (for example, a 4x4 matrix) is called a square matrix. NOTE: While the term "matrix" has technical meanings in both computing and mathematics, it is also used in everyday speech to describe structured data. For example, a comparison chart may also be called a comparison matrix because it contains a list of values in a grid format. Updated July 2, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/matrix Copy
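For example, many programming languages represent a matrix as an array of arrays. The short Python sketch below (illustrative only) stores values in a nested-list matrix and grows it dynamically, something a fixed-size table cannot do:

    matrix = [
        [1, 2, 3],
        [4, 5, 6],
    ]

    matrix.append([7, 8, 9])    # add a new row at runtime
    print(matrix[2][0])         # prints 7 (third row, first column)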

Mavericks

Mavericks Mavericks (also called OS X 10.9) is the tenth version of OS X, Apple's desktop operating system. It supersedes OS X 10.8 (Mountain Lion) and was released on October 22, 2013 as a free update for all OS X users. OS X Mavericks is the first free version of OS X (formerly Mac OS X) and also the first version that does not reference a feline in its name. While "Mavericks" is a significant change in the nomenclature of OS X, the user interface remains similar to previous versions of the operating system. For example, Mavericks is still based on the Finder, which includes a menu bar, dock, and a desktop background. It also runs nearly all applications designed for previous versions of OS X. Though Mavericks is not significantly different from Mountain Lion, it does include several new features. For example, in previous versions of OS X, the menu bar was only displayed on one screen, even if a Mac was connected to multiple displays. In OS X 10.9, the menu bar now shows up at the top of all connected monitors and corresponds to the front application open on that display. Additionally, each screen can be used in full-screen mode at the same time, which was not possible before. Mavericks is the first version of OS X to include Finder Tabs, which provides a tabbed interface for open Finder windows. It also supports Tags, or the capability to tag individual files with certain words, which provides a new way of organizing documents. Besides these system updates, Mavericks is the first version of OS X to include iBooks and Maps, which were both previously only available on iOS devices. It also includes new power-saving features that use energy more efficiently. For example, "Timer Coalescing" groups low-level operations together, which allows the CPU to enter a low-power state more often. "App Nap" slows down background applications and "Safari Power Saver" prevents browser plug-ins from running outside the active window. These improvements, while not obvious to the user, reduce power consumption and increase battery life on Apple laptops. Updated February 14, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mavericks Copy

Maximize

Maximize When you maximize a window on your computer screen, it becomes larger. In Windows, maximizing a window makes it take up the entire screen. In Mac OS X, a maximized window typically only takes up as much space as it needs. To maximize a window in Windows, click the button with a square icon next to the close button in the upper-right corner of the window, or double-click the title bar. The maximized window should expand to fill the screen. Once the window is expanded, the maximize button may change to a "restore" button. If you click the restore button or double-click the title bar again, the window will change back to its previous size. To maximize a window in Mac OS X, click the green button in the upper-left corner of the window. The window will expand to take up as much space as it needs to show the window's contents. If you click the maximize button again, the window will revert to its previous size. The maximize feature in Mac OS X is also called "zoom." On both Windows and Macintosh computers, the maximize button is located next to the minimize button, which hides the window. Updated April 11, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/maximize Copy

Mbps

Mbps Stands for "Megabits per second." Mbps can refer to a measurement of data transfer speeds over a network, or to the bit rate of a video file. Networking Mbps is a measure of data transfer speeds, expressing how many megabits can travel over a network connection per second. It is commonly used to measure network and Internet bandwidth. ISPs advertise their broadband Internet service speeds in Mbps. Extremely fast connections, with speeds of more than 1,000 Mbps, may instead be measured in Gbps, or gigabits per second. Different network technologies transfer data at different speeds. For example, some Ethernet connections are limited to 100 Mbps, while others can connect at 1 Gbps or more. A Wi-Fi router that supports the 802.11g standard can send and receive data at 54 Mbps, while a newer router that supports 802.11ac does so at more than 430 Mbps. A typical broadband Internet connection may range from 20 Mbps over a DSL connection to 300 Mbps over cable to more than 940 Mbps over a fiber optic line. Video Mbps is a measure of digital video bit rate in megabits per second, or the amount of data required to store one second of a video file. A higher bit rate means a higher-quality video with less compression. For example, an HD video stream at 6 Mbps will be clearer, with fewer compression artifacts, than a low-quality video encoded at 1 Mbps. A 4K movie streamed online may have a bit rate of 20 Mbps, but that same movie on a 4K Blu-ray disc can be encoded at a bit rate of 128 Mbps since it is not limited by Internet bandwidth. NOTE: Mbps is not to be confused with MB/s, or megabytes per second. Updated September 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mbps Copy

MBR

MBR Stands for "Master Boot Record." An MBR is a small section of a hard disk or other storage device that contains information about the disk. It is located in the boot sector and defines the disk partitions as well as the code used to start the boot sequence. There are several variations of MBRs, but they are all 512 bytes in size and contain a partition table and bootstrap code. Modern master boot records may contain disk signatures, timestamps, and other disk formatting information. Additionally, while early MBRs only supported four partitions, newer versions can support up to sixteen. However, all master boot records store partition locations as 32-bit sector addresses, which (with standard 512-byte sectors) means they can only address up to two terabytes of data. Therefore, disks that are formatted using an MBR are limited to 2TB of usable disk space. The master boot record was first used by PC-DOS compatible computers in 1983 and was the standard way to format DOS disks for over two decades. However, most hard drives are now using the GUID partition table (GPT), which is compatible with multiple operating systems and does not have a 2TB limit. NOTE: If necessary, a master boot record can be added to a GUID partition table for backwards compatibility. Updated August 30, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mbr Copy
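The two-terabyte ceiling follows directly from the size of those address fields, as this small Python calculation shows:

    max_sectors = 2 ** 32              # largest value a 32-bit sector address can hold
    sector_size = 512                  # bytes per sector on traditional disks
    print(max_sectors * sector_size)   # 2199023255552 bytes, about 2 TB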

MCA

MCA Stands for "Micro Channel Architecture." MCA was a type of expansion bus developed by IBM in 1987 for its PS/2 line of personal computers. It supported expansion cards, such as video cards, sound cards, and network interface cards. MCA memory expansion cards could house memory modules, allowing users to add RAM without replacing the motherboard. There were two different versions of the MCA expansion bus: 16-bit, with 116 pins 32-bit, with 172 pins IBM introduced MCA as a successor to the older AT (Advanced Technology) and ISA (Industry Standard Architecture) buses. It offered faster data transfer rates and a more streamlined design. However, MCA remained a proprietary technology, which limited its adoption. IBM's decision to keep the MCA architecture proprietary meant it was not compatible with the cards used by other manufacturers. Consequently, the industry gravitated towards more universally compatible standards like PCI (Peripheral Component Interconnect) and AGP (Accelerated Graphics Port), which became prevalent in the 1990s and beyond. While MCA represented a significant technological step forward, its limited adoption showed how proprietary standards can struggle in a competitive marketplace. Remember Sony Memory Stick? Updated January 21, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mca Copy

MDI

MDI Stands for "Medium Dependent Interface." MDI is a type of Ethernet port found on network devices. It is often used in contrast with MDIX (or "MDI-X"), which is similar to MDI, but switches the transmit and receive pins within the interface. When connecting two devices with an Ethernet cable, it is important that the transmit pins on one end line up with the receive pins on the other. If you connect a device with an MDI port to a device with an MDIX port, a standard straight-through Ethernet cable will do the job. However, when connecting two devices with MDI ports, such as two computers, an Ethernet crossover cable is required. The crossover cable switches the send and receive ports on the two connectors, allowing data to flow correctly between two MDI or MDIX ports. Straight-through and crossover Ethernet cables may look identical, but you can tell the difference by looking at the colors of the wires on each end. In a straight-through Ethernet cable, the colors will be the same on both ends, while in a crossover cable, the colors will be arranged differently. On some network devices, the ports are labeled MDI or MDIX to help you choose the right type of cable. Routers often have multiple MDIX ports and a single MDI port called an "uplink" port designed to connect a device such as a cable modem. If you connect a device to the wrong port or use the wrong cable, it may not be able to communicate with the router. Fortunately, Hewlett Packard developed a technology called Auto-MDIX (or "MDI/MDIX Auto Cross") that automatically switches between MDI and MDIX as needed. Today, most 10Base-T and 100Base-T network devices have Auto-MDIX ports, so it does not matter if you connect other devices using straight-through or crossover cables. Additionally, Gigabit Ethernet (1000Base-T) does not define specific send/receive pins, so it is not necessary to identify MDI or MDIX ports when connecting two Gigabit Ethernet devices. Updated April 2, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mdi Copy

MDM

MDM Stands for "Mobile Device Management." MDM refers to the administration of a group of mobile devices, such as smartphones, tablets, and laptops. It allows a company or organization's IT staff to remotely manage devices from a central dashboard instead of supporting each device in person. MDM for businesses and organizations is typically SaaS provided by third-party vendors instead of device manufacturers. Organizations install MDM on devices using a combination of software applications and system configuration settings. Once set up, IT staff can handle device management tasks remotely. For example, it can help ensure that system software and other applications are updated on a schedule or automatically install new applications when needed. It can help IT staff remotely troubleshoot support issues and even keep track of who is using a particular device and where it is. MDM is also important when it comes to an organization's information security. First, IT staff can take steps to secure devices without the assistance of the users. Remote management limits vulnerabilities by ensuring software is updated, security patches installed, and unvetted software blocked. It can even enforce the use of a VPN. Finally, it allows for lost or stolen devices to be located or, if necessary, remotely disabled and their data erased. NOTE: While the issues that MDM solves are more of a concern in businesses and organizations, many devices and operating systems include some basic MDM features for personal use. These features can help find a lost device or remotely disable or delete it if necessary. Updated October 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mdm Copy

Mebibyte

Mebibyte A mebibyte (MiB) is a unit of data storage equal to 2²⁰ bytes, or 1,048,576 bytes. Like kibibyte (KiB) and gibibyte (GiB), mebibyte has a binary prefix to remove any ambiguity when compared to the multiple possible definitions of a megabyte. A mebibyte is similar in size to a megabyte (MB), which is 10⁶ bytes (1,000,000 bytes). A mebibyte is 1,024 kibibytes. 1,024 mebibytes make up a gibibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the common definition of a megabyte could mean different numbers depending on what was measured — memory in binary values and storage space in decimal values. Using mebibyte to refer specifically to binary measurements helps to address the confusion. NOTE: Windows operating systems calculate storage and file sizes using mebibytes (MiB) and gibibytes (GiB), but use the labels for megabytes (MB) and gigabytes (GB). macOS uses actual megabytes and gigabytes when calculating sizes. The same disk or file will show slightly different values in each operating system — for example, a file that is exactly 1,000,000 bytes will appear as 976.56 KB in Windows and 1 MB in macOS. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 10, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mebibyte Copy
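The difference is easy to see with a quick calculation. This Python sketch converts the same one-million-byte file both ways, matching the Windows and macOS figures mentioned in the note above:

    size_in_bytes = 1_000_000

    print(size_in_bytes / 10**6)   # 1.0 megabyte (decimal)
    print(size_in_bytes / 2**20)   # about 0.9537 mebibytes (binary)
    print(size_in_bytes / 2**10)   # 976.5625, the "976.56 KB" figure from the note above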

Media

Media The word "media" has many uses in the context of computers and digital communication. It may refer to digital media content, like images and video, that has been published for broadcast or digital distribution. The collective noun "media" often refers to the companies that make and distribute digital media content. "Media" may also refer to digital storage media that stores and transports data. Digital Media Content The word "media," in the context of digital media content, refers broadly to digital images, audio, video, and text. More specifically, it refers to content created for publishing and wide distribution, either online or through other electronic methods. Songs saved as MP3 files are digital media, as are digital photographs and illustrations; a short story or essay published as a webpage is also digital media. "Media files" is a catch-all term for music, movies, television shows, and other images, audio, and video files. A "media library" is an organized collection of media files, often collected into a database. "Media players" are software applications designed for the playback of digital media. Storage Media Digital storage media is anything that can electronically store digital data. Storage media can take several forms, including removable and non-removable types. The most common forms of storage media are: Magnetic media, like hard disks, floppy disks, and tape drives, save digital data by manipulating the medium's magnetic charge. Optical media, like CDs, DVDs, and Blu-ray discs, store digital data encoded in physical peaks and valleys, etched in the reflective underside of the disc and read by lasers. Solid-state media, like flash memory drives, store data in tiny cells called floating-gate transistors that can hold an electrical charge. CDs, DVDs, flash drives, and SD cards are all kinds of digital media Each kind of storage media has its benefits. Hard drives are less expensive per gigabyte than solid-state flash memory but cannot read or write data nearly as quickly. Optical media is cheap and ideal for producing many copies of a read-only disc and is widely used to distribute movies and music. Flash memory is extremely fast and portable (since it has no moving parts), which makes it useful for removable storage devices like USB flash drives and SD memory cards. Updated August 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/media Copy

Media Compression

Media Compression Media compression reduces the file size of audio, video, and image files through the use of media-specific compression algorithms. Unlike standard file compression algorithms like ZIP or RAR, which reduce file size while perfectly preserving the file data, media compression algorithms are designed specifically for the type of data they compress. Most popular image file formats use some form of media compression. JPEG compression is a lossy method designed for photos that reduces subtle color variations to lessen the complexity of the image. GIF compression uses indexed color to reduce an image's color palette to 256 colors or fewer. PNG image compression uses a lossless algorithm that filters the image data and predicts the color of a pixel based on nearby pixels. Audio compression codecs also attempt to remove imperceptible data to save file size. Lossy audio compression formats like MP3 and AAC remove unnecessary parts of an audio file — sounds too quiet or high-pitched for the human ear to hear. Lossless audio codecs like FLAC instead identify patterns in an audio signal that can be compressed and recreated perfectly. Lossy codecs result in significantly smaller file sizes, but lossless codecs preserve the entire audio signal while modestly reducing file size. A video compressed using the HEVC video codec and the AAC audio codec Since video files are much larger than images and audio files, most video codecs use lossy compression to save space. Video files may contain multiple data streams — the video itself, multiple audio tracks, and subtitle and caption tracks. Video codecs like H.264, H.265, AV1, and VP9 use a technique called inter-frame compression, which analyzes the differences between frames and only encodes the areas that change. Meanwhile, audio tracks use standard audio compression formats like AAC. Finally, compressed video and audio streams are combined into a container format like MKV, MP4, or AVI for distribution. NOTE: In order to play back compressed media, a computer must have the necessary codecs installed. Related file extensions: .JPG, .GIF, .PNG, .MP3, .M4A, .AAC, .MP4, .MKV, .AVI. Updated April 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/media_compression Copy

Media Queries

Media Queries Media queries are a feature of CSS that enable webpage content to adapt to different screen sizes and resolutions. They are a fundamental part of responsive web design and are used to customize the appearance of websites for multiple devices. Media queries may be inserted within a webpage's HTML or included in a separate .CSS file referenced by the webpage. Below is an example of a simple media query: @media screen and (max-width: 768px) {   header { height: 70px; }   article { font-size: 14px; }   img { max-width: 480px; } } The media query above activates if a user's browser window is 768 pixels wide or less. This could happen if you shrink your window on a desktop computer or if you are using a mobile device, such as a tablet, to view the webpage. In responsive web design, media queries act as filters for different screen sizes. Like all modules within a cascading style sheet, the ones that appear further down the list override the ones above them. Therefore, the default styles are typically defined first within a CSS document, followed by the media queries for different screen sizes. For example, the desktop styles might be defined first, followed by a media query with styles for tablet users, followed by a media query designed for smartphone users. Styles may also be defined in the opposite order, which is considered "mobile first" development. While min-width is by far the most common feature used in media queries, many others are available as well. Examples include min-device-width, min-device-height, aspect-ratio, max-color-index, max-resolution, orientation, and resolution. The resolution value, for instance, may be used to detect HiDPI displays (such as retina displays) and load high-resolution graphics instead of standard images. Updated June 15, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/media_queries Copy

Medtech

Medtech Medtech, short for "medical technology," is an umbrella term that covers all technologies used for medical purposes. Related terms include biotech, edtech, and fintech. While technology has been part of medicine for hundreds of years, computers have made several new medical advancements possible. Examples include: Electronic patient records Online access to health information Data sharing across medical facilities Digital x-rays, MRIs, and CT scans Telehealth, such as remote doctor visits Digital health data is not just convenient; it also provides useful analytics. Data from thousands — even millions — of anonymous individuals can provide valuable insights into health trends or changes. These findings may reveal correlations between lifestyles and specific diseases and help determine what treatments are most effective. In recent years, wearable electronics have played an increasingly significant role in medtech. Devices like fitness bands and smartwatches record daily activity and heart rate. Most wearable electronics communicate via Bluetooth to a connected smartphone, providing an easy way to access and monitor personal health data. Updated July 31, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/medtech Copy

Megabit

Megabit A megabit is 10⁶ or 1,000,000 bits. One megabit (abbreviated "Mb") is equal to 1,000 kilobits. There are 1,000 megabits in a gigabit. It is important to distinguish megabits (Mb) from megabytes (MB), as there are 8 megabits in a single megabyte. 800 megabits are equal to 100 megabytes. Megabits per second (Mbps) is commonly used to measure data transfer rates of broadband Internet connections. For example, an ISP may offer cable modem Internet access with download speeds up to 200 Mbps and upload speeds up to 20 Mbps. Fiber connections can provide even faster speeds. For example, 500/500 fiber Internet connections are available in certain areas within the U.S. and Europe. This type of connection provides symmetrical downstream and upstream data transfers at up to 500 megabits per second. While megabits are used to measure Internet speeds, megabytes are commonly used to measure file size. This difference can make it tricky to determine how long a download will take to complete. You must first divide the download speed by eight in order to convert megabits per second to megabytes per second. For example, if you download a 100 megabyte file at a speed of 40 Mbps, it will take 20 seconds. 40 Mbps / 8 = 5 MBps.   →   100 MB / 5 MBps = 20 seconds. Updated September 29, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/megabit Copy
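The same download-time estimate can also be written as a tiny Python sketch:

    file_size_mb = 100               # file size in megabytes
    speed_mbps = 40                  # connection speed in megabits per second

    speed_mbs = speed_mbps / 8       # 5 megabytes per second
    print(file_size_mb / speed_mbs)  # 20.0 seconds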

Megabyte

Megabyte A megabyte is 10⁶ or 1,000,000 bytes. One megabyte (abbreviated "MB") is equal to 1,000 kilobytes and precedes the gigabyte unit of measurement. While a megabyte is technically 1,000,000 bytes, megabytes are often used synonymously with mebibytes, which contain 1,048,576 bytes (2²⁰ or 1,024 x 1,024 bytes). Megabytes are often used to measure the size of large files. For example, a high resolution JPEG image file might range in size from one to five megabytes. Uncompressed RAW images from a digital camera may require 10 to 50 MB of disk space. A three-minute song saved in a compressed format may be roughly three megabytes in size, and the uncompressed version may take up 30 MB of space. While CD capacity is measured in megabytes (typically 700 to 800 MB), the capacity of most other forms of media, such as flash drives and hard drives, is typically measured in gigabytes or terabytes. NOTE: For a list of other units of measurement, view this Help Center article. Updated October 16, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/megabyte Copy

Megahertz

Megahertz One megahertz (abbreviated: MHz) is equal to 1,000 kilohertz, or 1,000,000 hertz. It can also be described as one million cycles per second. Megahertz is used to measure wave frequencies, as well as the speed of microprocessors. Radio waves, which are used for both radio and TV broadcasts, are typically measured in megahertz. For example, FM radio stations broadcast their signals between 88 and 108 MHz. When you tune to 93.7 on your radio, the station is broadcasting at a frequency of 93.7 MHz. Megahertz is also used to measure processor clock speeds. This measurement indicates how many instruction cycles per second a processor can perform. While the clock speeds of processors in mobile devices and other small electronics are still measured in megahertz, modern computer processors are typically measured in gigahertz. Abbreviation: MHz. Updated March 3, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/megahertz Copy

Megapixel

Megapixel A megapixel is a unit of measurement equal to one million pixels. Megapixel count is typically used to describe the total number of pixels a digital camera's sensor can capture and the resolution of the resulting digital photographs. A camera sensor with a higher megapixel count can capture larger, more detailed images. An image or sensor's megapixel count is found by multiplying its number of pixels horizontally by its number of pixels vertically, then dividing by a million. For example, the camera sensor in an iPhone 11 captures images at a resolution of 4,032 x 3,024, which results in 12,192,768 total pixels or roughly 12 megapixels. A Canon EOS R50, on the other hand, captures larger photos at a resolution of 6,000 x 4,000 to produce 24-megapixel images. Higher-resolution images require more storage space on the camera's SD card or internal storage. A photo taken by an iPhone 11, which uses a 12-megapixel camera sensor When viewing images on a computer or mobile device's screen, you may not be able to notice all of the detail captured by a high-megapixel sensor. For example, a 10-megapixel image completely fills a 4K display, so going up to 24 or 48 megapixels requires zooming out to see the whole thing, losing some detail as pixels merge together. A higher resolution is more helpful when printing photographs; for example, a 24-megapixel image printed at 300 DPI is roughly the size of a standard 12" x 18" photo. A high megapixel count becomes very important when editing and cropping photos since you start with as much image data as possible; you can always resize an image to make it smaller and easier to share with others later. While a higher megapixel count generally means a more detailed image, other factors also significantly affect the final results. The quality of the lenses and other optics in the camera affects how sharp the image that reaches the sensor is. The sensor's physical size (independent of its pixel count) affects how much light it receives, reducing noise and producing better low-light photos. Adjustable camera settings like shutter speed and ISO can affect an image's sharpness and brightness. Finally, external factors like the amount and quality of light are always crucial to capturing a good photo. Updated September 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/megapixel Copy
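The resolution-to-megapixel calculation described above takes only a couple of lines of Python:

    def megapixels(width, height):
        return width * height / 1_000_000

    print(megapixels(4032, 3024))   # 12.192768, roughly 12 MP (iPhone 11)
    print(megapixels(6000, 4000))   # 24.0, or 24 MP (Canon EOS R50)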

Meme

Meme A meme is a concept or behavior that spreads from person to person. Examples of memes include beliefs, fashions, stories, and phrases. In previous generations, memes typically spread within local cultures or social groups. However, now that the Internet has created a global community, memes can span countries and cultures across the world. Memes that are propagated online are called "Internet memes." Examples of behavioral Internet memes include using Facebook and checking email several times a day. While Facebook and email are now part of many people's daily life, both technologies are relatively new. They expanded through technological and social evolution to become commonplace in many cultures. Now Facebook and email, which are memes themselves, are common mediums used to spread Internet memes. Another type of Internet meme is "chat slang," which includes acronyms and abbreviations used online. Over the past several years, Internet users have developed ways to communicate faster by creating new shorthand. Abbreviations like BTW and LOL are part of an entirely new lexicon created by Internet users. Emoticons, or text-based expressions, such as =) and :-P have become popular memes as well. Internet memes may also be subjects, such as people or animals made popular by blogs or other websites. For example, a politician or celebrity involved in a public scandal may become an Internet meme thanks to numerous bloggers who publish their thoughts on the story. A household pet may even become a meme if it stars in a YouTube video that goes viral. These types of Internet memes are typically short-lived, but they are part of the larger meme of social networking, in which people participate in online communities. Updated November 28, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/meme Copy

Memory

Memory Computer memory refers to any medium that stores digital data to be accessed and used by a computer. The term often refers to a computer's primary system memory, or RAM, that temporarily stores active programs and files. However, it may also refer to a computer's storage devices like hard drives, solid-state drives, and removable USB flash drives. Random access memory, or RAM, is a computer's primary system memory. When a computer boots up, it loads the operating system into RAM, allowing it to run programs and accept user input. Opening a program or a file also loads it into RAM, letting the CPU run the program and read and modify the file. RAM is faster than storage disks, allowing the CPU to read its contents with less delay. However, RAM is a type of volatile memory that does not preserve its contents while powered off. Turning off a computer (or restarting it) clears the RAM entirely, so anything not saved to a disk is lost. A stick of Crucial RAM A computer's CPU accesses the RAM over a dedicated series of wires on the motherboard, called a bus. Since the memory bus provides a direct connection between the RAM and CPU, the CPU can read data from RAM with little latency. The CPU also includes a small amount of memory, called a cache, built directly into the CPU chip. This cache is even faster than the system RAM but is limited in size, so it only stores the instructions telling the CPU what to do and other frequently-used data. Read-only memory, or ROM, is a non-volatile memory chip built into every computer and component. ROM chips keep their contents in a read-only state that can only be rewritten under certain conditions (depending on the type of ROM chip). ROM is not meant for temporary storage but instead stores data like a device's firmware, bootloader, and BIOS / UEFI — all of which contain instructions that are necessary for a computer to boot up. Storage devices like hard drives, solid-state drives, and USB flash drives are also known as secondary memory. These devices provide longer-term data storage for files, applications, and the computer's operating system. Secondary memory is slower than system RAM but is available in much larger sizes. The CPU does not have direct access to the contents of secondary memory, so files and applications are copied into RAM when opened. Updated December 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/memory Copy

Memory Bank

Memory Bank A memory bank is a designated section of computer memory used for storing data. Much like a financial bank, a memory bank serves as a repository for data, allowing data to be easily entered and retrieved. Banks are organized into logical units that are ordered consecutively, providing easy access to individual items. Memory banks are commonly used for caching data. By storing frequently used information in memory banks, the data can be accessed quickly and easily. This speeds up common tasks that are run within the operating system and other programs. Various applications may also use memory banks to store data while the program is running. Sets of physical memory modules may also be referred to as memory banks. However, to avoid confusion, these are usually called banks of memory. Updated July 15, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/memorybank Copy

Memory Leak

Memory Leak A memory leak is a type of programming error that causes a program to slowly consume system memory until none is left. A memory leak happens slowly, causing computer performance to gradually degrade until the program crashes or the entire operating system freezes. Quitting the program responsible for the leak frees up the system's memory and resolves the problem until the next time the program runs. Memory leaks happen when a program fails to release memory after it no longer needs it. When a program runs a process, it requests an amount of memory from the system to run that process. After that process finishes, the program should release that memory back to the system so that other programs may use it. If the program does not include the correct functions that free up memory (or those functions aren't used correctly), it will instead hold on to that memory indefinitely. When the program runs that process again later, it requests more memory from the system instead of using the memory it had already allocated, increasing the amount of RAM it uses until the system has nothing left for other functions. You can identify which app is causing a memory leak by looking at memory usage in the task manager. A program that suffers from a memory leak will continue to leak memory every time it runs until the error is fixed. Thankfully, software development kits often include debuggers that can check a program's source code for memory leaks. Once the program's developer finds the error, they can update the program and release a patch. NOTE: Some programming languages like Java and Python use a memory management technique called "garbage collection" that automatically reclaims memory and can resolve many (but not all) memory leak errors. Updated October 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/memory_leak Copy
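To illustrate the pattern described above, here is a minimal, hypothetical Java sketch (the LeakDemo class and its processRequest method are invented for illustration) in which a program keeps references to memory it no longer needs, so the memory can never be reclaimed:

import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // References added to this list are never removed, so the memory
    // they point to can never be reclaimed while the program runs.
    private static final List<byte[]> cache = new ArrayList<>();

    static void processRequest() {
        byte[] buffer = new byte[1024 * 1024]; // allocate 1 MB for this request
        // The work is done, but the buffer is mistakenly kept forever.
        cache.add(buffer);
    }

    public static void main(String[] args) {
        // Each call leaks about 1 MB; memory use climbs until the
        // program eventually fails with an OutOfMemoryError.
        while (true) {
            processRequest();
        }
    }
}

Removing the cache.add(buffer) line, or clearing the list once the data is no longer needed, releases the memory and fixes the leak.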

Memory Module

Memory Module A memory module is another name for a RAM chip. It is a general term used to describe SIMM, DIMM, and SO-DIMM memory. While there are several different types of memory modules available, they all serve the same purpose, which is to store temporary data while the computer is running. Memory modules come in different sizes and have several different pin configurations. For example, the original SIMMs had 30 pins (which are metal contacts that connect to the motherboard). However, newer SIMM chips have 72 pins. DIMMs commonly come in 168-pin configurations, but some DIMMs have as many as 240 pins. SO-DIMMs have a smaller form factor than standard DIMM chips, and come in 72-pin, 144-pin, and 200-pin configurations. While "memory module" is the technical term used to describe computer memory, the terms "RAM," "memory," and "RAM chip" are just as acceptable. But remember, while memory terms may be interchangeable, the memory itself is not. This is because most computers only accept one type of memory. Therefore, if you decide to upgrade your computer's RAM, make sure the memory modules you buy are compatible with your machine. Updated February 11, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/memorymodule Copy

Memory Stick

Memory Stick Memory Stick is a type of flash memory developed by Sony. It is used to store data for digital cameras, camcorders, and other kinds of electronics. Because Memory Stick is a proprietary Sony product, it is used by nearly all of Sony's products that use flash media. Unfortunately, this also means Memory Stick cards are incompatible with most products not developed by Sony. Memory Stick cards are available in two versions: Memory Stick PRO and Memory Stick PRO Duo. Memory Stick PRO cards are 50mm long by 21.5mm wide and are 2.8mm thick. Memory Stick PRO Duo cards are 31mm long by 20mm wide and are only 1.6mm thick. High-speed versions of Memory Stick media support data transfer rates up to 80Mbps, or 10 MB/sec, which is fast enough to record high-quality digital video. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/memorystick Copy

Menu Bar

Menu Bar A menu bar is a user interface element that contains selectable commands and options for a specific program. In Windows, menu bars are typically located at the top of open windows. In OS X, the menu bar is always fixed at the top of the screen, and changes depending on what program is currently active. For Macs with multiple screens, OS X Mavericks (OS X 10.9) displays a different menu bar for the active application within each screen. While menu bar items vary between applications, most menu bars include the standard File, Edit, and View menus. The File menu includes common file options such as New, Open…, Save, and Print. The Edit menu contains commands such as Undo, Select All, Copy, and Paste. The View menu typically includes zoom commands and options to show or hide elements within the window. Other menu bar items may be specific to the application. For example, a text editor may include a Format menu for formatting selected text and an Insert menu for inserting pictures or other media into a document. A web browser may include a History menu for reviewing previously visited websites and a Bookmarks menu for viewing bookmarked webpages. Many programs also include Window and Help menus for selecting window options and viewing Help documentation. If you browse through the options in a menu bar, you’ll notice many of the items have symbols and letters next to them. These are keyboard shortcuts that allow you to perform commands in the menu by simply pressing a key combination. For example, the standard keyboard shortcut to save a file is Control+S (Windows) or Command+S (Mac). By pressing this key combination, you can quickly save an open document without even clicking the menu bar. While common commands often have keyboard shortcuts, other menu items may not have a shortcut associated with them. These items can only be selected by choosing the command or option within the menu bar. Updated December 18, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/menu_bar Copy

Mesh Wi-Fi

Mesh Wi-Fi A mesh Wi-Fi system is a series of Wi-Fi routers that work together to create a single wireless network. The goal is to provide a reliable and consistent wireless signal across a large home or workspace. A basic Wi-Fi network uses a single router to broadcast a wireless signal to multiple devices. While this functions well in a small area, the signal may not reach certain locations in a larger space. Obstructions, such as concrete walls and floors, can create "dead zones" where the signal strength drops significantly. The solution is to spread out a "mesh" of routers across the area. A mesh network consists of a primary router and one or more nodes or "points." The router and nodes work seamlessly together to create a single wireless network. By strategically placing nodes across a home or office, it's possible to eliminate dead zones and provide consistent Wi-Fi coverage across the area. As you move around a mesh network, it intelligently switches your connection to the router with the strongest signal, similar to how cellular towers work. Thanks to the improved range of Wi-Fi 6, most areas under 2,000 square feet only require a single router. A mesh Wi-Fi network with a single node generally covers up to 4,000 square feet, while a router with two nodes may be sufficient for areas up to 6,000 square feet. In a large building with multiple floors, a mesh network may require several dozen nodes for consistent coverage. Mesh Networks vs Range Extenders Range extenders, or repeaters, are an alternative way to boost the range of Wi-Fi coverage. Unlike mesh nodes, range extenders simply repeat the Wi-Fi signal and do not provide "intelligent" switching. They require separate networks for each repeater, which may each have a different wireless password. Because range extenders do not create a seamless network, Wi-Fi mesh systems are a more popular option. Updated October 6, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mesh_wi-fi Copy

Meta Search Engine

Meta Search Engine Meta search engines are search engines that search other search engines. Confused? To put it simply, a meta search engine submits your query to several other search engines and returns a summary of the results. Therefore, the search results you receive are an aggregate result of multiple searches. While this strategy gives your search a broader scope than searching a single search engine, the results are not always better. This is because the meta search engine must use its own algorithm to choose the best results from multiple search engines. Often, the results returned by a meta search engine are not as relevant as those returned by a standard search engine. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/metasearchengine Copy

Meta Tag

Meta Tag A meta tag is a special HTML tag used to store information about a web page, known as metadata, that is not displayed in the web browser. Some meta tags include technical instructions for the web browser about how to display the web page, while others contain information about the page and its author. The content of meta tags is also used when a search engine indexes a web page. Meta tags are found between the <head> tags at the beginning of an HTML file, along with the page title and linked resources. Each meta tag first specifies a type of information, then includes the content itself. For example, the tag <meta name="description" content="A short summary of the page"> specifies that it contains the page description, and the content includes the description that search engines will use. A web page will have several meta tags, but no more than one for each type of content. A few more common types of metadata are listed below. Keywords - contains a list of keywords for the web page. After years of abuse in the SEO field, search engines no longer use keywords when prioritizing search results. Author - lists the author of the web page's contents for posterity. Viewport - gives the web browser instructions on how to scale the web page for different screen sizes. Robots - contains instructions for how search engines should index the web page. Updated September 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/metatag Copy

Metadata

Metadata Metadata is data that describes other data. Digital files often include metadata to provide information about the file and its contents. For example, a digital image may include metadata like the date and time of the photograph, its location, and the camera and settings used. A text document's metadata may list the author's name, the document's title, and a summary of its contents. Metadata does not usually appear when viewing a file, although it can often be displayed when needed. Some metadata can be modified while editing a file, like specifying an author's name; other metadata is automatically set by the computer's operating system or the software creating the file, like a file creation date. Most modern file systems use an index of file metadata for file searches. This index allows you to quickly search your computer for files using the information in file metadata. HTML files and websites frequently use metadata to provide information about the content of a webpage to search engines. HTML files contain meta tags that include information about the page — its author, a description, some keywords, or even special instructions that tell a web browser how to display the page contents. Search engines can use these tags when organizing and displaying search results. Updated December 6, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/metadata Copy
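As a rough illustration of the metadata an operating system tracks automatically, the following Java sketch reads a file's size, creation time, and last-modified time using the standard library (the file name example.txt is just a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class MetadataDemo {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("example.txt"); // placeholder file name

        // Read the basic metadata stored by the file system.
        BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);

        System.out.println("Size (bytes):  " + attrs.size());
        System.out.println("Created:       " + attrs.creationTime());
        System.out.println("Last modified: " + attrs.lastModifiedTime());
    }
}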

Metafile

Metafile Metafile is a generic term for a file format containing multiple types of data streams, along with descriptive metadata. A metafile may also be called a "container" or "wrapper" file. Archive files, library files, graphic files, and multimedia files are all common types of metafiles. The term "metafile" is most often applied to graphics file formats containing both vector and raster image data. These files contain a set of instructions that tell the graphics renderer how to draw a vector shape and text, and how to use an embedded bitmap image. Windows uses the Enhanced Metafile (EMF) format as its standard graphics metafile format, while macOS uses PDF. Other categories of metafiles are often referred to using more specific terms instead. Archive files (like ZIP and RAR) compress any kind of data into a single file. Library files (like DLL) contain a combination of program code, data, and graphic resources. Multimedia container files (like AVI, MP4, and MKV) are a type of metafile that contains video, audio, and closed caption text data streams. Updated November 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/metafile Copy

Metaheuristic

Metaheuristic A heuristic is a set of rules for solving problems or making decisions. In computer science, heuristics are the foundation of algorithms. Metaheuristics are larger principles or guidelines that establish the heuristics used to create algorithms. Both heuristics and metaheuristics apply to computer programming. A heuristic applies to a specific problem, while a metaheuristic is a general guideline that is problem-independent. Developers use metaheuristics to produce consistent programming practices, while they develop heuristics for specific solutions. For example, a software development team may build a search engine using both metaheuristics and heuristics. Below are examples of each: Search Metaheuristics The following metaheuristics apply to all search engines: create an index of searchable data to improve search efficiency use a "fuzzy search" to search for terms similar to keywords entered by the user, instead of only exact matches order results by most relevant to least relevant Search Heuristics The following heuristics may apply to a specific type of search engine: produce search results that are most relevant to the user's location customize search results based on information stored in the user's account use search history to provide a list of autocomplete search phrases The first list above provides general guidelines for an effective search engine. The second list provides specific features a search engine should have. In some cases, heuristics and metaheuristics may overlap in their scope. The best way to distinguish between the two is to determine if it is a general rule (metaheuristic) or if it applies to a specific problem/solution (heuristic). Updated December 5, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/metaheuristic Copy

Metaverse

Metaverse The metaverse is a virtual world where people can interact, learn, work, shop, and create in a 3D environment. It combines virtual reality with an open-world multiplayer setting, similar to those offered by Roblox and Fortnite. While still in the early stages, the goal of the metaverse is to provide a virtual environment that simulates reality. It will allow people to explore an online space, meet other people, and build relationships. The metaverse will also support buying and selling virtual goods online. In many ways, the metaverse simulates real life, but it also provides an alternate reality. For example, users create avatars, or characters, to represent themselves in the virtual world. Each user's avatar is a personalized creation that may appear similar or completely different from the actual person. Metaverse combines the words "meta" and "universe." While "meta" has several different meanings, in "metaverse," it means something of a higher or second-order kind. In other words, the metaverse is another realm that exists outside the real world. Metaverse Requirements The idea of a metaverse has existed for decades, but only in recent years has technology made it possible. The metaverse itself is hosted by hundreds, or even thousands, of interconnected servers around the world. Each client, or user, who connects to the metaverse must have the following technologies: High-speed Internet connection Fast GPU (graphics processor) VR (virtual reality) headset Metaverse Examples A prime example of a metaverse is the virtual world represented in the movie "Ready Player One," released in 2018. Online multiplayer games like Second Life, Fortnite, Minecraft, and Roblox have elements of the metaverse, but their worlds are self-contained and don't incorporate virtual reality. In 2021, Facebook changed the company name to "Meta." It plans to leverage its large social community and Oculus VR technology to create a new metaverse. Updated November 13, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/metaverse Copy

Method

Method A method is a subroutine attached to a specific class defined in the source code of a program. It is similar to a function, but can only be called by an object created from a class. In the Java example below, the method getArea is defined within the Rectangle class. In order for the getArea method to be used by a program, an object must first be created from the Rectangle class. class Rectangle {    int getArea(int width, int height)    {       int area = width * height;       return area;    } } Methods are an important part of object-oriented programming since they isolate functions to individual objects. The methods within a class can only be called by objects created from the class. Additionally, methods can only reference data known to the corresponding object. This helps isolate objects from each other and prevents methods within one object from affecting other objects. While methods are designed to isolate data, they can still be used to return values to other classes if necessary. If a value needs to be shared with another class, the return statement (as seen in the example above) can be used. Updated April 19, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/method Copy
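Building on the Rectangle example above, the following sketch shows how a program might create an object from the class and call its method (the surrounding Main class is added only for illustration):

public class Main {
    public static void main(String[] args) {
        // A method can only be called through an object created from its class.
        Rectangle rect = new Rectangle();
        int area = rect.getArea(5, 3); // returns 15
        System.out.println("Area: " + area);
    }
}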

MHL

MHL Stands for "Mobile High-definition Link." MHL is a technology designed for connecting mobile devices, such as smartphones and tablets, to TVs, displays, home theater systems, and automotive interfaces. It was developed and standardized by multiple companies that make up the MHL Consortium. These include Nokia, Samsung, Sony, Toshiba, and Lattice Semiconductor. An MHL connection provides three primary functions: It transmits a full HD audio and video signal It provides electrical power to mobile devices It supports RCP data transmissions 1. HD Signal The HD connection allows a smartphone or tablet to mirror its screen content on an HDTV in full resolution with almost no latency. The MHL 1 and MHL 2 standards support a maximum resolution of 1080p at 60 FPS. This makes it possible to play movies and games from a mobile device on an HDTV with no lag. MHL 3 supports 4K resolution at 30 FPS. A new variation of the standard, called superMHL, supports 8K video at 120 FPS. All MHL technologies support 8-channel high-definition audio. 2. Power When an MHL-compatible device is connected to a TV or other hardware that supports MHL, it can receive electrical power at up to 10W (MHL 3) or 40W (superMHL). Therefore an MHL connection can be used to charge a mobile device and will prevent it from running out of power during a video or gaming session. 3. Data Transmission The MHL standard supports data transmissions over the Remote Control Protocol (RCP). This means it can send and receive messages, such as play, pause, volume up/down, and other commands. RCP support means you can use your television remote to control your MHL-connected device. MHL Interface There is no such thing as an MHL cable. Instead, MHL operates over existing connectors. Android smartphones commonly have Micro-USB ports while TVs and other A/V devices often have HDMI ports. Therefore, an MHL-compatible Micro-USB to HDMI adapter is commonly required to connect the two devices. superMHL also supports USB-C, which provides more bandwidth than a Micro-USB cable. Updated February 18, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mhl Copy

Microblogging

Microblogging Microblogging is posting brief and often frequent updates online. Unlike traditional blogs, which are often hosted on a custom website, microblogs are typically published on social media sites like Twitter or Facebook. The most common microblogging platform is Twitter, which allows you to post updates of 140 characters or less. These updates, called tweets, may include hashtags, mentions (links to other Twitter users), or links to online resources, such as webpages, images, or videos. When you microblog using Twitter, your updates are seen by all users who have chosen to "follow" you. Microblogging on Facebook is more flexible than on Twitter, since you can post longer updates and include media directly in your posts. You can also share content with other users, similar to Twitter's "retweet" feature. Though Facebook makes it easy to post quick updates, its focus is more towards social networking than microblogging. Therefore, Twitter remains the most popular microblogging platform. While Facebook and Twitter dominate the microblogging scene, there are several other options available. One popular service is Tumblr, a website (owned by Yahoo!) that was designed specifically for microblogging. Tumblr allows you to easily insert photos, videos, quotes, and links into your posts and includes a "reblog" feature for sharing other users' posts. Another service is Google+, which is similar to Facebook, and allows you to post updates that can be seen by the public or specific users within Google+ circles. Instagram (owned by Facebook) is a microblogging platform designed for sharing images, while Vine allows you to share short videos. Updated March 12, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microblogging Copy

Microcomputer

Microcomputer A microcomputer is a computer designed for individual use. The term was introduced in the 1970s to differentiate desktop computer systems from larger minicomputers. It is often used synonymously with the term "desktop computer," but it may refer to a server or laptop as well. In the 1960s and 1970s, computers were much larger than today, often taking up several cubic feet of space. Some mainframe computers could even fill a large room. Therefore, the first computers that could fit on a desktop were appropriately labeled "microcomputers" in comparison to these larger machines. The first microcomputers became available in the 1970s and were used primarily by businesses. As they became cheaper, individuals were able to buy their own microcomputer systems. This led to the personal computer revolution of the 1980s, in which microcomputers became a mainstream consumer product. As microcomputers grew in popularity, the name "microcomputer" faded and was replaced with other more specific terms. For example, computers purchased for business purposes were labeled as workstations, while computers bought for home use became known as personal computers, or PCs. Eventually, computer manufacturers developed portable computers, which were called laptops. While computers have evolved a lot over the past few decades, these same terms are still used today. Updated April 11, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microcomputer Copy

Microcontroller

Microcontroller A microcontroller (or MCU for "microcontroller unit") is a small integrated circuit that controls an electronic device. It is similar to a microprocessor but includes memory and other integrated components. A typical microcontroller includes the following elements: CPU - ranging from a simple 4-bit processor to a full-blown 64-bit processor RAM - high-speed volatile memory used to store data temporarily Flash memory - a type of non-volatile memory used to store data with or without electrical power I/O interface - serial ports or other connectors for peripheral devices Wireless communications - transmitters or receivers for sending and receiving wireless signals Some MCUs are several inches long and wide. Others are extremely small, with a length and width of less than one millimeter. Because microcontrollers are integrated circuits, most are relatively flat. Below are examples of different types of microcontrollers: an image processor on a digital camera a digital-to-analog converter (DAC) in a powered speaker a temperature sensor in a washing machine or dryer a Wi-Fi transmitter in a smart appliance a motion tracker in a smartwatch Nearly all electronic devices include microcontrollers. Many include multiple MCUs that operate in tandem. An automobile, for example, may contain dozens of microcontrollers for sensors, communication, and user interface functions. Updated August 19, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microcontroller Copy

Microkernel

Microkernel A microkernel is a minimalistic kernel designed to be as small as possible. It contains only the basic code needed to communicate with hardware and load an operating system. Most modern kernels, sometimes called "monolithic kernels," contain several million lines of code. For example, the Linux 3.0 kernel includes over 15 million lines. Microkernels, on the other hand, generally contain less than 10,000 lines of code. They are able to maintain a small size by loading most system processes in user mode rather than in the kernel itself. A monolithic kernel may include device drivers, file system support, and inter-process communication (IPC) protocols for applications. A microkernel only includes basic system IPC protocols and memory management functions. Everything else runs in user mode as separate processes outside the kernel. This keeps the kernel size small and also provides a modular type of OS since custom drivers and file systems can be loaded without modifying the kernel. Microkernels were popular in the 1980s because of the memory and storage limitations of early computer systems. While they are still used for some server OSes, most major operating systems, such as Windows and OS X, use monolithic kernels. Updated February 26, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microkernel Copy

MicroLED

MicroLED Stands for "Micro Light Emitting Diode." A microLED screen is a type of flat-panel display that uses an array of LEDs to create an image. Each pixel combines three microscopic RGB diodes capable of lighting themselves, eliminating the need for a separate backlight. Since each pixel can be brightened, dimmed, or turned off independently, microLED screens can offer significantly better contrast levels than LCD screens. MicroLED screens have several advantages over other flat-panel display technology like LCD and OLED. They are much thinner, consisting of only four layers of material. They require less energy since the LEDs are very energy-efficient. Since individual diodes can be turned off, microLED screens have the high contrast and deep black levels of OLED screens; however, the LEDs used can get significantly brighter than those in OLED screens and are not susceptible to screen burn-in. MicroLED screens are flexible and can bend and curve around surfaces. They are also modular, allowing smaller panels to be combined after production to create larger panels in a variety of shapes. The primary drawback of microLED screens is their extremely high cost, resulting from the complexity of reliably producing individual diodes at microscopic sizes. As of 2022, microLED screens are not yet commonly made for the consumer market; most microLED screens are both too large for comfortable use as a television or computer monitor and prohibitively expensive for most people. MicroLED technology is instead mostly used in commercial settings and the motion picture industry as components of virtual sets. How MicroLED Screens are Made A microLED screen has just four layers of components that make up the display, resulting in a thinner and lighter screen than other kinds of flat-panel. These layers include, from bottom to top: Substrate - the base layer that supports the rest of the panel, typically made of glass or plastic. Electrode - the layer that provides an electrical current to individual pixels, turning them on or off and changing their brightness. RGB Micro LEDs - separate microscopic red, green, and blue diodes are assembled into a matrix of pixels. These diodes can individually brighten or dim to create millions of different colors per pixel. Cover - a protective layer made of glass or plastic. Updated December 20, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/microled Copy

Micron

Micron A micron is a small unit of measurement that measures length. It is another name for "micrometer," which is one thousandth of a millimeter, or one millionth of a meter. An object that is only a single micron wide is not visible to the human eye. However, a human hair, which has a width around 50 micron, can be seen. Microns are used in many industries, such as biology (e.g., to measure cell size) and chemical engineering (e.g., to measure particle filtration). They are also used in computing to measure the size of elements within integrated circuits and other components. Additionally, manufacturing specifications may state the dimensions of a product must fall within a certain micron number to meet quality control standards. While a micron is extremely small, it is still too large to measure some computing components such as transistors and bus widths. These are often measured in nanometers, which are one thousandth of a micron, or one billionth of a meter. NOTE: The term "micron" is used in both singular and plural cases. For example, one micron is half the length of two micron. While the term "micron" is still used in some industries, micrometers have become a more standard unit of measurement. Abbreviation: µ Updated August 14, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/micron Copy

Microphone

Microphone A microphone is a device that captures audio by converting sound waves into an electrical signal. This signal can be amplified as an analog signal or may be converted to a digital signal, which can be processed by a computer or other digital audio device. While all microphones (or "mics") serve the same basic function, they can capture audio in several different ways. Therefore, multiple classes of microphones exist. The three most common types are described below. Dynamic - Dynamic microphones are the most widely used microphones. They have a simple design that includes a magnet wrapped by a metal coil. A thin sheet called a diaphragm is placed on the front end of the magnet and transmits vibrations from sound waves to the coil. The coil then transfers these vibrations to electrical wires that transmit the sound as an electrical signal. Since dynamic microphones use a simple design, they are typically very durable and do not require electrical power. Condenser - Condenser microphones are commonly used for audio recording purposes. They are known for their sensitivity and flat frequency response. Each condenser microphone includes a front plate (the diaphragm) and a back plate that is parallel to the front plate. When sound waves hit the diaphragm, it vibrates and alters the distance between the two plates. This change is transmitted as an electrical signal. Unlike dynamic microphones, condensers require electrical power. This current may be provided by an internal battery, but is most often provided as 48 volt "phantom power" from an external preamp or mixing console. Ribbon - Ribbon microphones are also known for their high fidelity. They contain a thin ribbon made of aluminum, duraluminum, or nanofilm, which is suspended in a magnetic field. Incoming sound waves make the ribbon vibrate, generating voltage proportional to the velocity of the vibration. This voltage is transmitted as an electrical signal. While early ribbon microphones required a transformer to increase the output voltage, modern ribbon mics have improved magnets that provide a stronger signal – in some cases even stronger than dynamic microphones. Though ribbon mics have been largely replaced by condensers, several models are still manufactured and used today. Not only do microphones come in several different classes, they also use several types of directional patterns to capture audio. Some microphones are designed with a single "polar pattern," while others have switches that allow you to select the appropriate pattern for a specific recording purpose. Some of the most common patterns include: Cardioid - a heart or bean-shaped pattern that captures audio from one direction; commonly used for recording vocals or a single instrument. Bidirectional - a figure 8 pattern that captures audio from two separate directions; may be used for recording audio from two different sources or to capture reverb. Omnidirectional - a circular pattern that captures audio from all directions; often used to capture groups of vocalists or ambient sounds. NOTE: Microphones perform the opposite action of speakers, which convert electrical signals into sound waves. Updated June 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microphone Copy

Microsoft

Microsoft Microsoft is a US-based technology company. It was founded by Bill Gates and Paul Allen in 1975 and quickly grew to become the largest software company in the world. Today, Microsoft is still widely known for its software, but the company also develops hardware and provides a number of cloud services. Microsoft's most recognizable software titles include Windows and Office. Microsoft Windows is the world's most popular desktop operating system and Office is the most popular productivity suite. Below is a list of some of the software, hardware, and cloud services provided by Microsoft. Software Windows - operating system for desktop computers, laptops, and tablets Office - productivity suite including Word, Excel, PowerPoint, and Access Outlook - email and calendar software Visio - diagramming and vector drawing program Visual Studio - multi-platform software development IDE for building apps Skype - video conferencing application Halo - popular series of video games for Xbox Hardware Xbox - video game console first launched in 2001; followed by Xbox 360, Xbox One, and Xbox One X Surface - line of tablets and laptop computers HoloLens - head-mounted smartglasses designed for augmented reality Input devices - Microsoft branded keyboards and mice Services Bing - web search engine integrated into Windows 10 Outlook.com - free webmail service also used for Hotmail accounts OneDrive - cloud storage with web-based access Azure - cloud computing platform for hosting data and deploying applications Microsoft's headquarters is located in Redmond, Washington, but the company has corporate offices and data centers around the world. Microsoft Corporation's ticker symbol is MSFT. Updated July 18, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/microsoft Copy

Middleware

Middleware Middleware has two separate but related meanings. One is software that enables two separate programs to interact with each other. Another is a software layer inside a single application that allows different aspects of the program to work together. The most common type of middleware is software that enables two separate programs to communicate and share data. An example is software on a Web server that enables the HTTP server to interact with scripting engines like PHP or ASP when processing webpage data. Middleware also enables the Web server to access data from a database when loading content for a webpage. In each of these instances, the middleware runs quietly in the background, but serves as an important "glue" between the server applications. Middleware also helps different applications communicate over a computer network. It enables different protocols to work together by translating the information that is passed from one system to another. This type of middleware may be installed as a "Services-Oriented Architecture" (SOA) component on each system on the network. When data is sent between these systems, it is first processed by the middleware component, then output in a standard format that each system can understand. Middleware can also exist within a single application. For example, many 3D games use a "3D engine" that processes the polygons, textures, lighting, shading, and special effects in the game. 3D engines are considered middleware, since they bring different aspects of the game together. For example, the game's artificial intelligence works in conjunction with the 3D engine to create the gameplay. Game engine middleware includes a custom API, which provides developers with standard functions and commands used for controlling objects within the game. This simplifies game development by allowing programmers to use a library of prewritten functions rather than creating their own from scratch. It also means 3D engines can be used in more than one game. Updated February 3, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/middleware Copy

MIDI

MIDI Stands for "Musical Instrument Digital Interface." MIDI is a connectivity standard for transferring digital instrument data. It is primarily used by computers, synthesizers, and electronic keyboards. However, MIDI is supported by several other instruments, such as electronic drums, beat boxes, and even digital stringed instruments like guitars and violins. MIDI data includes several types of information. For example, pressing a single key on a synthesizer transmits the note played, the velocity (how hard the note is pressed), and how long the note is held. If multiple notes are played at once, the MIDI data is transmitted for all the notes simultaneously. Other data that may be sent over a MIDI connection includes the instrument ID, sustain pedal timings, and controller information, such as pitch bend and vibrato. When a synthesizer is hooked up to a computer via a MIDI connection, the notes played can be recorded by DAW software in MIDI format. The MIDI data can be played back by sending the recorded MIDI notes to the keyboard, which outputs them as audio samples, such as a piano or strings. Most DAW software supports MIDI editing, allowing you to adjust the timing and velocity of individual notes, change their pitch, delete notes, or add new ones. MIDI data is often displayed in a digital format, with lines representing each note played. Many programs can convert MIDI data to a musical score as well. A MIDI recording simply contains instrument information and the notes played. The actual sound is played back using samples (individual recordings) from real instruments. For example, a MIDI track recorded as a song for piano can be played back with a guitar sound by simply changing the output instrument — though it might not sound very realistic. Originally, MIDI connections used MIDI cables, which connected to a 5-pin MIDI port on each device. Now, most MIDI devices have standard computer interfaces, such as USB or Thunderbolt ports. These modern interfaces provide more bandwidth than traditional MIDI ports, allowing more tracks with more data to be transmitted at once. General MIDI The MIDI standard defines 128 "General MIDI" (or GM) instruments, which are available on most computers as software instruments. The sound of each General MIDI instrument may vary between different computers and keyboards because the samples used for the instruments may be different. However, the ID for a GM instrument is the same across devices. For example, GM instrument #1 is always an acoustic piano, #20 is a church organ, and #61 is a french horn. Modern synthesizers typically include hundreds or even thousands of other instruments that can be selected, most of which provide more authentic sound than the General MIDI options. Updated July 15, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/midi Copy
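As a rough sketch of the note and velocity data described above, the following Java example uses the standard javax.sound.midi package to play middle C (note number 60) at a velocity of 93 on the computer's default software synthesizer:

import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class MidiNoteDemo {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open(); // load the default instrument samples

        MidiChannel channel = synth.getChannels()[0];
        channel.noteOn(60, 93);   // start the note (pitch, velocity)
        Thread.sleep(1000);       // hold it for one second
        channel.noteOff(60);      // release the note

        synth.close();
    }
}

Changing the channel's program (instrument) before calling noteOn would play the same note with a different sound, which mirrors how a recorded MIDI track can be played back with any instrument.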

MIME Type

MIME Type MIME stands for "Multipurpose Internet Mail Extensions." A MIME type or "media type" is a text string that identifies a specific file format. It consists of two parts — a type and a subtype — separated by a forward slash. For example, the MIME type of a plain text file is: text/plain History SMTP, the standard protocol used to send email messages, was not originally designed to support file transfers. Email attachments had to be encoded as part of the message, limiting their size. MIME, which was introduced in the 1990s, made it easier to send files via email using a custom Content-Type: header. Decades later, SMTP still uses the same method. Email messages that contain attachments include a header that looks something like this: Content-Type: multipart/mixed; boundary="[identifier]" Other Internet protocols soon adopted MIME types as a way to identify files transferred over the Internet. HTTP, for example, uses MIME types to identify the type of data accessible from a URL. A standard webpage has the following MIME type: Content-Type: text/html Since MIME types have many other uses besides email, they are also called media types. MIME Type Examples MIME types are used to identify thousands of different file formats. The official list is maintained by the IANA (Internet Assigned Numbers Authority). Some examples include: text/html - HTML document (webpage) text/css - CSS file text/javascript - JavaScript file image/jpeg - JPEG image file image/png - PNG image file audio/wav - WAVE audio file video/mp4 - MPEG video file application/zip - ZIP compressed archive MIME Types vs File Extensions MIME types and file extensions are similar in that they both identify file formats. However, MIME types provide two advantages: They contain a type and subtype. The type (preceding the forward slash) helps categorize file types, making it easy for applications to filter files. For example, an Internet program may allow the transfer of text and image files, but not audio or video files. They have a one-to-one relationship with file formats. Some file formats have multiple file extensions. For example, a JPEG image may have a .JPG or .JPEG extension. A webpage may have an .HTM or .HTML extension. MIME types provide standard identifiers for file formats that are recognized across applications and different protocols. Updated April 16, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mime_type Copy
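Many programming languages can look up a file's MIME type. The following minimal Java sketch (assuming a file named index.html exists in the working directory) uses the standard library to guess a file's media type from its name or contents:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MimeTypeDemo {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("index.html"); // placeholder file name

        // Returns a MIME type string such as "text/html",
        // or null if the type cannot be determined.
        String mimeType = Files.probeContentType(file);
        System.out.println(file + " -> " + mimeType);
    }
}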

Mini DV

Mini DV Most digital camcorders record video and audio on a Mini DV tape. The cassettes measure 2.6 x 1.9 x 0.5 inches (L x W x H), while the tape itself is only 0.25 inches wide. A Mini DV tape that is 65 meters long can hold an incredible 11GB of data, or 80 minutes of digital video. The small size of Mini DV tapes has helped camcorder manufacturers reduce the size of their video cameras significantly. Some consumer cameras that use Mini DV tapes are smaller than the size of your hand. Because Mini DV tapes store data digitally, the footage can be exported directly to a computer using a Firewire (IEEE 1394) cable. So if you want to record video and edit it on your computer, avoid the SVHS and Hi-8 options and make sure to get a camera that uses Mini DV. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/minidv Copy

Minicomputer

Minicomputer While a minicomputer sounds like a small computer, the name can be a bit misleading. In fact, minicomputers are several times the size of desktop PCs and are only one step below mainframes in the hierarchy of computer classes. The term "minicomputer" was introduced in the 1960s to describe powerful computers that were not as large as mainframes, which sometimes could fill an entire room. Instead, most minicomputers were a few feet wide and several feet tall. They were primarily used by large businesses during the 1960s and 1970s to process large amounts of data. Some minicomputers also functioned as servers, allowing multiple users to access them from connected terminals. As computer processors became smaller and more powerful, microcomputers began to rival minicomputers in processing power. Therefore, in the 1980s, minicomputers started becoming less relevant and eventually became obsolete. Today, rack-based servers perform similar functions to minicomputers. Updated April 11, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/minicomputer Copy

Minification

Minification In computer science, minification is the process of removing unnecessary elements and rewriting code to reduce file size. It is commonly done to web page resources, such as HTML, CSS, and JavaScript files. Reducing the size of web resources allows the files to be transferred more quickly, making web pages load faster. There are several ways to minify data. The most basic is to remove comments, unnecessary spaces, and line breaks (newline characters). While comments and whitespace help make the code more readable, they are ignored by the browser. Therefore, these elements can be safely removed before publishing. Another method is to minimize the code required for each statement. In CSS, this is often accomplished by converting longhand CSS to shorthand CSS. For example, a margin definition might take seven lines in longhand, but only one line in shorthand. In JavaScript, long variable names can be replaced with shorter ones (often a single character). Below is an example of CSS code before and after minification. Note how the comments, spaces, line breaks, and unnecessary semicolons are removed. The code is also converted from longhand CSS to shorthand. Standard CSS code img.left   /* float left 400px image */ {     float: left;     max-width: 400px;     margin-top: 8px;     margin-right: 30px;     margin-bottom: 12px;     margin-left: 0px; } Minified CSS code img.left{float:left;max-width:400px;margin:8px 30px 12px 0} Advanced minification algorithms can reduce file size even further. A CSS minifier, for instance, may find and remove duplicate lines within a CSS file. It may also combine similar CSS definitions into a single statement. A JS minifier may actually rewrite JavaScript functions to be more efficient. Minifying code often saves only a few kilobytes. For example, a standard CSS file may be 50KB and the minified version may be 40KB. However, when improving page load speed, every kilobyte matters. The goal of a good minifier is to reduce file size as much as possible with zero impact on the way the code is parsed or processed. Regardless of what minifier is used, developers typically maintain a non-minified version of the code for future editing. Minification vs Compression While minification and file compression both reduce file size, they are not identical. Minification simply alters the text while file compression completely rewrites the binary code within a file. A compressed file must be decompressed by a file decompression utility in order to be read as a text file. Many websites use a combination of minification and "gzip" file compression to reduce the size of web resources as much as possible. Updated January 22, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/minification Copy
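The following Java sketch is a deliberately naive illustration of the basic minification steps described above; real minifiers are far more sophisticated and parse the code rather than relying on simple text replacement:

public class NaiveCssMinifier {
    // Removes comments, collapses whitespace, and trims spaces
    // around common CSS punctuation. For illustration only.
    static String minify(String css) {
        return css
                .replaceAll("(?s)/\\*.*?\\*/", "")       // strip /* comments */
                .replaceAll("\\s+", " ")                 // collapse whitespace
                .replaceAll("\\s*([{};:,])\\s*", "$1")   // remove spaces around punctuation
                .trim();
    }

    public static void main(String[] args) {
        String css = "img.left {\n    float: left;\n    max-width: 400px;\n}";
        System.out.println(minify(css)); // img.left{float:left;max-width:400px;}
    }
}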

Minimize

Minimize When you minimize a window, you hide it from view. This is commonly done to unclutter the display or to view other open windows without closing the current window. In Windows, minimizing a window will create a button for it in the taskbar. In Mac OS X, an icon for the minimized window is added to the right side of the dock. To minimize a window in Windows, click the button with the icon of a horizontal line, located in the upper-right corner of the window. This will hide the window and create a corresponding button for it in the taskbar. To open the window again, simply click the button in the taskbar. To minimize a window in Mac OS X, click the yellow button in the upper-left corner of the window or double-click the title bar. This will shrink the window into an icon that is stored in the dock. Like Windows, clicking the icon will open the window again. One fun thing to try is to hold the Shift key while clicking the minimize button. Instead of shrinking immediately, the window will contract into the dock in slow motion. The minimize button is located next to the maximize button on both Macintosh and Windows computers. Updated April 11, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/minimize Copy

Minisite

Minisite A minisite, sometimes called a microsite, is a small website dedicated to a specific topic. Most minisites contain around five pages, though a minisite may be as small as a single webpage or as large as 20 pages. Minisites are often related to larger websites. For example, a sporting goods website may have several associated minisites. Each of these small websites would be dedicated to a specific related topic, such as running shoes, soccer clothing, or hockey equipment. The purpose of these minisites would be to drive traffic to the larger sporting goods website. While minisites are created for a number of reasons, the most common is e-commerce. For example, affiliates often build minisites that drive traffic to large retailers in order to generate commissions. Some online retailers may even create their own minisites to broaden their reach. If a publisher is able to get several sites ranked highly in search engines, it may result in more traffic and therefore more sales. While creating minisites may seem like a good strategy for increasing Web traffic, small websites with limited content typically do not rank well in search engines. Therefore, many companies have found that it is more productive to add content to the main site rather than spread it out across multiple sites. NOTE: "Microsite" and "minisite" are often used synonymously. However, a microsite may refer specifically to a minisite that only contains one page. Updated January 23, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/minisite Copy

MIPS

MIPS Stands for "Million Instructions Per Second." MIPS is a unit of measurement that indicates the raw processing speed of a CPU. It quantifies how many millions of instructions a processor can execute in one second. While MIPS is a straightforward, objective measurement of computing power, it is too simplistic to measure the actual performance of a processor. Factors like processor architecture, memory bandwidth, and I/O speed all affect the computational power of a CPU. For instance, a processor rated at 400,000 MIPS might outperform another rated at 500,000 MIPS in certain tasks due to differences in architecture and efficiency. As a result, MIPS alone is generally not a reliable indicator of processor performance. MIPS vs Clock Speed Modern processors are typically rated by clock speed rather than MIPS. Clock speed, which measures processing cycles per second, provides a better gauge of CPU power. However, some processors use fewer instructions than others to complete the same calculations, so clock speed is not a reliable performance metric. The best way to measure processing performance is not in MIPS or megahertz but with benchmark tests, which measure actual calculations. Updated July 26, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mips Copy

Mirror

Mirror In computing, a mirror is a server that provides an exact copy of data from another server. This may be one or more files, a database, a website, or an entire server. Mirroring is designed to provide fault tolerance, or a means of redundancy in case something goes wrong with the primary or "principal" server. If the principal server unexpectedly goes offline, for example, the mirror server can take over. This process may be performed automatically, but it requires a third system, called a "witness" server. This machine monitors both servers and transfers all traffic to the mirror if an outage is detected with the principal server. Server mirroring can also be used for planned maintenance, such as upgrading a server or running software updates that require services to be stopped or restarted. In these instances, a server admin can manually set the mirror as the principal server to avoid downtime. Some mirroring setups allow "role switching," in which the principal and mirror servers can be swapped at any time. Another type of mirroring – FTP mirroring – simply provides one or more files from multiple servers. For example, a download page may list several "mirror" URLs, which all offer the same file. The mirrors are typically listed by geographical location, so you can select the one closest to you. While FTP mirrors still exist, they have faded in popularity as automatic location detection and content delivery networks have automated this process. Updated August 17, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mirror Copy

Mirrored Volume

Mirrored Volume A mirrored volume is a volume on a storage drive that contains an exact copy of the data on another drive. The most common reason to use a mirrored volume is to provide fault tolerance in case of hardware failure, allowing the mirror to serve as a backup. A mirrored volume is typically implemented through a RAID 1 array, although other software may offer the same functionality. Mirroring a storage volume can improve read speeds when accessing data since the operating system can read files from both drives simultaneously. However, write speeds are not improved since each disk has to write its own copy of all data. Mirroring a volume also does not increase your computer's total storage space, as both drives are considered a single volume with a capacity equal to the smallest drive in the array. Using two drives in a mirrored array drastically reduces the odds of data loss due to hardware failure. For example, the odds of a single hard drive failing over a 12-month period are about 1%, but the odds of both drives in a mirrored array dying within that year are 0.01% (or 1 in 10,000). If one drive in a mirrored array experiences a hardware failure, all you need to do is replace it before the other fails. At that point, the contents of the remaining drive copy over to the replacement. (Pictured: a pair of drives in a two-way mirrored configuration in Windows 11.) You can create a mirrored storage volume using a hardware RAID controller or a software utility. Most operating systems include a feature to create mirrored storage volumes. Windows uses a feature called Storage Spaces to create RAID-like storage pools across multiple disks; its "two-way mirror" setting can create a mirrored volume across two storage drives. The macOS Disk Utility can create a software RAID array, as can Linux tools like mdadm. NOTE: Cloning a drive and mirroring a drive both create two copies of your data, but they serve different purposes. Cloning a drive makes a one-time snapshot of it at a specific moment, while mirroring a volume keeps both drives in sync at all times. Updated October 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mirrored_volume Copy

MIS

MIS Stands for "Management Information System." An MIS is a system designed to manage information within a company or organization. This includes employees, departments, projects, clients, finances, and other types of data. At its most general level, an MIS may include non-computer based elements, such as the structural hierarchy of an organization. However, in the computing world, an MIS typically refers to the hardware and software used to manage information. The hardware required for a management information system can vary widely depending on the size and data processing requirements of an organization. A small business, for example, may only need a single machine to store information, such as employee data, projects, and invoices. A large business may require several systems that allow employees to share data securely across multiple locations. Data stored in an MIS is often backed up in multiple locations for redundancy. Of course, hardware must have the appropriate software installed to assist in a management information system. Some common functions of MIS software include employee record keeping, invoicing, inventory management, project planning, customer relationship management, and business analysis. Some software programs are designed for specific tasks, such as maintaining financial records or backing up data. Other programs are designed to be comprehensive solutions that perform multiple MIS functions in a single software package. Examples of MIS software include Microsoft Dynamics, Fleetmatics WORK, Clarity Professional MIS, and Tharstern Limited. MIS programs designed specifically for the graphics and print industry include Avanti Slingshot, EFI Pace, and DDS Accura. Most MIS software programs are available as desktop applications, though many solutions now include web-based interfaces and mobile apps as well. Updated July 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mis Copy

Mixed Reality

Mixed Reality Mixed reality is a blend of physical and virtual worlds that includes both real and computer-generated objects. The two worlds are "mixed" together to create a realistic environment. A user can navigate this environment and interact with both real and virtual objects. Mixed reality (MR) combines aspects of virtual reality (VR) and augmented reality (AR). It is sometimes called "enhanced" AR since it is similar to AR technology, but provides more physical interaction. Mixed Reality vs Virtual Reality Virtual reality provides a virtual experience that is completely digitally created. A user can enter a virtual world by putting on a VR headset that includes audio and video. As the user moves his head, he can look around the three-dimensional space. A VR headset may also come with gloves or handheld sticks that can be used to interact in the virtual environment. Unlike virtual reality, mixed reality starts with a physical space and places virtual objects within it. A user can interact with these objects without wearing special gloves. A VR headset is totally black when it is turned off, while an MR headset is see-through like a pair of glasses. Mixed Reality vs Augmented Reality Mixed reality is closer to augmented reality than virtual reality. Like MR, AR uses a real environment. It "augments" reality by overlaying virtual objects on top of it, such as a Snapchat filter. However, the user cannot interact directly with virtual objects. For example, AR has the capability to display a virtual 3D box on a physical table. With MR, the user might be able to pick up and open the box. AR only requires a handheld device that has a camera and screen, such as a smartphone. Since MR enables the user to interact directly with the 3D environment, it requires a head-mounted display, such as Microsoft's HoloLens. Mixed Reality has several applications, such as virtual training for police, remote guidance for construction workers, and virtual testing for engineers. While still in the early stages, MR can also be combined with robotic technology. This could allow a surgeon to perform surgeries remotely. Updated March 15, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mixed_reality Copy

MMS

MMS Stands for "Multimedia Messaging Service." MMS is a mobile phone service that allows users to send multimedia messages to each other. This includes images, videos, and sound files. MMS is an extension of SMS, which is used to send and receive text messages. Like text messages, multimedia messages are first transmitted to a central server maintained by the cellular service provider. Once the message has been received by the server, it is forwarded to the recipient. If the recipient's phone is off or she does not have cell phone service when the message is sent, the server will hold the message and send it once the recipient's phone is available. Most modern cell phones and smartphones support MMS messaging. MMS support is typically integrated into the text messaging interface and activates automatically when needed. For example, if you type a text-only message, it will be sent using SMS. If you add a graphic or video, the multimedia portion will be transmitted via MMS. Similarly, if someone sends you a multimedia message, your phone will automatically use MMS to receive the file. If your phone does not support MMS messages, you will most likely receive a text message that includes a URL where you can view the file from a Web browser. Updated August 25, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mms Copy

Mnemonic

Mnemonic A mnemonic (pronounced "nemonic") is a pattern that can be used as an aid for memorizing information. Most often, this pattern consists of letters or words. For example, the phrase "Every Good Boy Does Fine" can be used to help music students remember the notes of the staff, E, G, B, D, and F. The name "Roy G. Biv" is often used to memorize the order of colors in a rainbow (or other light spectrum) -- Red, Orange, Yellow, Green, Blue, Indigo, Violet. While initials of words are commonly used as mnemonic devices, rhyming words and poems can also be used to memorize information. Furthermore, images can be associated with words or phrases to help memorize them. Because the human brain organizes information in "chunks," mnemonics help people categorize information better, which makes it easier to remember. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mnemonic Copy

Mobile

Mobile The word "mobile" has a storied history in the computer world. The term was initially used in the 1980s to describe computers that you could take with you and use while you were on-the-go. These devices eventually became small and light enough to fit on your lap and were known as laptops. For many years, "mobile" differentiated between desktop computers and their portable counterparts. Then came the smartphone. Spurred by the release of the first iPhone in 2007, mobile phones evolved into small computers that could surf the web, send email, and run different apps. The IT community started using the word "mobile" to describe cell phones as well as computers. Next came tablets. These lightweight touchscreen devices, which are even more portable than laptops, introduced a new mobile category between laptops and smartphones. "Mobile" evolved into an umbrella term that now describes laptops, tablets, and smartphones. The Different Meanings of Mobile When used in different contexts, such as "mobile development," "designing for mobile," and "mobile-friendly," the term "mobile" can have different meanings. For example, "mobile development" typically refers to creating touch-based apps for tablets and smartphones, but not laptops. "Designing for mobile" might refer to responsive web design that supports mobile devices with smaller screens, such as tablets and smartphones. "Mobile-friendly" refers to websites that are easy to use on smartphones. In this case, "mobile" is used synonymously with "smartphone," but not necessarily tablet. This smartphone-specific definition of mobile has become common in web analytics. For example, Google Analytics separates traffic by device into Desktop, Tablet, and Mobile categories. "Desktop" refers to both desktop and laptop computers, while "Mobile" refers specifically to smartphones. Updated August 13, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mobile Copy

MoCA

MoCA Stands for "Multimedia over Coax Alliance." MoCA is a technology that uses a building's existing coax cable television wiring as part of a computer network, instead of (or in addition to) standard Ethernet wiring. This provides the benefit of a wired network without the need to run new Ethernet cables. MoCA networks can handle speeds up to 2.5 Gbps, depending on the adapters used. One common use case for a MoCA network is in a home that has coax run to multiple rooms for cable television, but no Ethernet. For example, if the modem and router are in the basement, the Wi-Fi signal may struggle to reach the bedrooms on the second story. Creating a MoCA network over the existing coax wiring can provide an Ethernet connection in any other room with a coax wall outlet for a wireless access point, computer, game console or other device. MoCA adapter with coax and Ethernet connections In order to use coax wiring as a MoCA network, you will need at least two MoCA adapters. Some networking routers include a built-in MoCA adapter, but you can find dedicated MoCA adapters from a number of manufacturers. Start by connecting one MoCA adapter to your router via an Ethernet cable, to a coax wall outlet via a coax cable, and to power. Connect another adapter to a coax wall outlet in another room, to a computer or other device via Ethernet, and to power, and the two adapters will use the coax network to bridge the Ethernet computer network seamlessly (although some MoCA adapters may require some extra software configuration). NOTE: If you do run a MoCA network over your home's coax wiring, you will want to make sure that your internal network traffic does not leak out to the cable connections for your neighborhood. A Point of Entry (POE) filter on the coax wiring entering your house will filter out the frequencies used for MoCA traffic while allowing cable television and Internet traffic to proceed normally. Updated September 19, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/moca Copy

Mod Function

Mod Function A mod function divides two numbers and returns the remainder. It requires the same input values as a division operation but outputs the "leftover" value instead of the division result. For example: 7 mod 3 = 1 8 mod 3 = 2 If there is no remainder, the result is zero. 9 mod 3 = 0 In mathematical formulas, the mod or "modulo" operator is often represented with a percentage sign. For example, 7 mod 3 can be written as the following expression: 7 % 3 = 1 In computer science, a mod function may be written as mod(x,y), or something similar, depending on the programming language. For example: mod(7,3) = 1 The function above divides the first value (dividend) by the second value (divisor) and returns the remainder. Because 7 ÷ 3 = 2r1, the result of mod(7,3) is 1. Mod Function Applications The mod function has several uses in computer programming. Since the result must be less than the divisor, it can limit results to a specific range. For example mod(x,4) can only return 0, 1, 2, or 3, assuming x is an integer. It is useful for checking every nth value or categorizing results into a limited number of "buckets." In most programming languages, 0 evaluates as false, and other numbers evaluate as true. Therefore, a mod function can provide a boolean result — false if two numbers are evenly divisible and true if they are not. Below is an example of a mod function within an if statement: if (mod(x,5)) then { ... } If x is not evenly divisible by 5, the code in the then clause will execute. If x is divisible by five — 5, 10, 15, 20, etc — the code will not. NOTE: Mod functions often have two integers as arguments. However, floating point numbers are also acceptable. Two integer values always produce an integer, but one or more fractional inputs may output a fractional result. For instance, 8 % 2.5 = 0.5 since 8 ÷ 2.5 = 3r0.5. Updated May 24, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mod_function Copy
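The snippet below is a minimal Python sketch of these ideas (the mod helper function and the sample values are only for illustration); Python exposes the modulo operation directly through the % operator.

def mod(x, y):
    """Return the remainder of x divided by y, using Python's % operator."""
    return x % y

print(mod(7, 3))   # 1
print(mod(9, 3))   # 0

# Limiting results to a range: mod(n, 4) can only return 0, 1, 2, or 3.
buckets = [mod(n, 4) for n in range(8)]   # [0, 1, 2, 3, 0, 1, 2, 3]

# Divisibility check: a non-zero remainder evaluates as true.
for n in (10, 12):
    if mod(n, 5):
        print(n, "is not evenly divisible by 5")   # prints only for 12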

Modem

Modem Modem is short for "Modulator-Demodulator." It is a hardware component that allows a computer or another device, such as a router or switch, to connect to the Internet. It "demodulates" an incoming analog signal from a telephone or cable wire into digital data (1s and 0s) that a computer can recognize. Similarly, it "modulates" digital data from a computer or other device into an analog signal that can be sent over standard telephone lines. The first modems were "dial-up," meaning they had to dial a phone number to connect to an ISP. These modems operated over standard analog phone lines and used the same frequencies as telephone calls, which limited their maximum data transfer rate to 56 Kbps. Dial-up modems also required full use of the local telephone line, meaning voice calls would interrupt the Internet connection. Modern modems are typically DSL or cable modems, which are considered "broadband" devices. DSL modems operate over standard telephone lines, but use a wider frequency range. This allows for higher data transfer rates than dial-up modems and lets them operate without interfering with phone calls. Cable modems send and receive data over standard cable television lines, which are typically coaxial cables. Most modern cable modems support DOCSIS (Data Over Cable Service Interface Specification), which provides an efficient way of transmitting TV, cable Internet, and digital phone signals over the same cable line. NOTE: Since a modem converts analog signals to digital and vice versa, it may be considered an ADC or DAC. Modems are not needed for fiber optic connections because the signals are transmitted digitally from beginning to end. Updated September 2, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/modem Copy

Modifier Key

Modifier Key A modifier key modifies the action of another key when the keys are pressed at the same time. Common modifier keys include Shift, Function, Control, Alt, Command, and Option. The Shift key is found on all keyboards, while the other keys may be exclusive to laptops or Windows or Macintosh computers. Below is a list of the common modifier keys and their primary uses. Shift - used for capitalizing letters and entering different types of symbols. Function (Fn) - commonly found on laptop computers and used to enter the F1 - F15 commands if the function keys also have other functions (such as brightness and volume controls). Control (Ctrl) - a Windows key used for entering keyboard shortcuts, such as Ctrl+S for saving a file or Ctrl+P for printing a document. Alt - a Windows key that is also called the Alternate or Alternative key; used in combination with the numeric keypad for entering "Alt codes," which output special characters; may also be used in combination with the Control key for entering keyboard shortcuts. Command (Cmd) - a Macintosh key (similar to the Windows Control key) used for entering keyboard shortcuts; may have a four leaf clover or Apple icon printed on the key. Option - A Macintosh key (similar to the Windows Alt key) used for entering special characters; may also be used in combination with the Command key for entering keyboard shortcuts. To enter a key combination that requires a modifier key, first press the modifier key, then the other key in the combination. For example, to save a document using the common "Ctrl+S" shortcut, first press and hold the Control key. Then press and release the "S" key to perform the command. Finally, release the Ctrl key. While most key combinations require only one modifier key, some require multiple modifier keys. For example, in Mac OS X, Cmd+Shift+3 takes a screenshot of the screen. In this case, either modifier key can be pressed first, as long as both keys are held when the "3" key is pressed. Updated April 2, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/modifierkey Copy

Modulation

Modulation Modulation is a process that takes a simple signal (like a sine wave) and encodes another signal within it to transmit information. It allows analog and digital data to travel across radio waves, telephone wires, and coaxial cable lines. Devices that modulate and demodulate signals to transmit data over the Internet are called modems. First, a circuit called a modulator receives an incoming signal, known as a modulation signal, containing some data to transmit. This modulation signal can be analog data, like an audio signal from a microphone, or a digital signal called a bitstream. The modulator merges the modulation signal with a higher-frequency signal, called a carrier signal or carrier wave, by altering its amplitude, frequency, or phase. That carrier wave travels over a transmission medium like radio waves or transmission wire. A demodulator on the other end receives the signal and applies a reverse process to separate the modulation signal from the carrier signal. Analog Modulation The most common use for analog modulation is transmitting radio signals over long distances. It used to be the primary method for sending other signals like television and voice communication, but those signals now use digital modulation instead. There are three primary methods used to modulate analog signals: Amplitude Modulation (AM) modulates a signal by making the amplitude of the carrier wave stronger or weaker to carry the modulation signal while the carrier wave's frequency remains constant. Frequency Modulation (FM) modulates a signal by keeping the amplitude constant but changing the carrier wave's frequency based on the modulation signal. FM is less susceptible to interference than AM. Phase Modulation (PM) modulates a signal by changing the carrier wave's phase based on the amplitude of the modulation signal. Both the amplitude and frequency of the carrier wave remain constant. Digital Modulation Digital modulation can transmit digital data bitstream over analog signals, like over phone or cable lines (using modems) or digital radio signals (Wi-Fi, Bluetooth, or cellular). Instead of modulating a carrier wave to carry another signal, digital modulation modulates the carrier wave to transmit a bitstream. The most basic form of digital modulation sends one bit at a time by switching the signal between two states: on (1) and off (0). The number of times that a modulated signal can change per second is known as the baud rate. More advanced methods use a modulation alphabet to transmit data more efficiently. A modulation alphabet can encode multiple bits into a single tone. For example, one method may encode groups of 2 bits using four distinct tones — one tone for 00, another for 01, a third for 10, and the fourth for 11. A method that encodes 4 bits at a time would instead use 16 different tones. There are many types of digital modulation, but the most common types are as follows: Phase-Shift Keying (PSK) modulates a signal by changing the phase of the carrier wave between a finite set. Frequency-Shift Keying (FSK) modulates a signal by changing the carrier wave's frequency between a finite set of frequencies. Audio Frequency-Shift Keying (AFSK) is a digital modulation method used in early computer modems. Amplitude-Shift Keying (ASK) modulates a signal by changing the amplitude of the carrier wave between a finite set of amplitudes. ASK modulation transmits digital data over fiber optic lines by changing the amplitude of the light sent over the cables. 
Quadrature Amplitude Modulation (QAM) modulates a signal by changing the amplitude of two separate carrier waves that are out of phase with each other. QAM is part of the Wi-Fi communications specification. Updated November 3, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/modulation Copy
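As a rough illustration of the modulation-alphabet idea described above, the Python sketch below maps each 2-bit group of a bitstream to one of four tones and back again; the frequencies are made-up example values and do not correspond to any real modulation standard.

SYMBOL_TABLE = {"00": 1000.0, "01": 1200.0, "10": 1400.0, "11": 1600.0}  # tone frequencies in Hz (illustrative)

def modulate(bits):
    """Map each 2-bit group of a bitstream to a tone frequency."""
    if len(bits) % 2:
        bits += "0"   # pad to a whole number of 2-bit symbols
    return [SYMBOL_TABLE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def demodulate(tones):
    """Reverse the lookup to recover the original bits."""
    reverse = {freq: bits for bits, freq in SYMBOL_TABLE.items()}
    return "".join(reverse[t] for t in tones)

tones = modulate("01101100")      # [1200.0, 1400.0, 1600.0, 1000.0]
print(demodulate(tones))          # 01101100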

Mojave

Mojave Mojave (pronounced "mo-ha-vee") is the name of macOS 10.14, the fifteenth version of macOS (previously OS X). It was released on September 24th, 2018 and follows macOS 10.13, also known as High Sierra. macOS Mojave is a minor update to macOS but includes a few notable new features. One of the most significant is Dark Mode, which makes the menu bar and window backgrounds dark. It also changes the light grey and white backgrounds of supported applications to dark grey with white text. Dark mode was inspired by the dark interface of software development IDEs used by programmers and is intended to reduce eye strain. Other UI improvements in Mojave include "time-shifting" desktops that change color during the day, a new Gallery View in the Finder, and Stacks, which automatically groups similar file types together. Mojave also includes new document editing capabilities at the operating system level. For example, the updated Quick Look feature allows you to mark up a PDF or edit an image without even opening it with an application. You can also edit and share screenshots immediately after taking them. The screen capture feature in Mojave is the first to support screen recording. macOS Mojave is the first version of macOS to include support for iOS apps. While only a handful of Apple's own iOS apps were supported by Mojave at the time of launch, Apple plans to allow third party developers to build crossplatform apps that run on both iOS and macOS in the future. Updated December 26, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mojave Copy

Molex Connector

Molex Connector A Molex connector is a common type of connector used to power internal computer components. The name comes from the Molex Connector Company, which pioneered the two-piece electrical connectors that became standard in computers and other electronics. Since Molex makes dozens of different connectors, there is no single connector technically called a "Molex connector." However, the term "Molex connector" has become a generic way to describe the Molex 4-pin 8981 power connector used to power hard drives, optical drives, and other storage devices. It contains four pins (colored yellow, black, black, and red) encased in a white plastic connector. The connector snaps into a receptacle attached to the power supply and locks into place, ensuring a reliable connection. The 8981 connector's simple and inexpensive design helped it become the standard power connector in desktop computers for several decades. While the Molex 8981 connector is still used in many PCs, modern storage devices, such as SSDs and SATA drives often use newer types of connectors. However, other products from the Molex family are still used in many electronic devices. These connectors are not generically referred to as "Molex connectors," but instead have proprietary names. Examples include the "Mini-Fit Jr." motherboard connector and the "PanelMate" connector used for powering flat panel displays. Updated July 8, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/molex_connector Copy

Monitor

Monitor The term "monitor" in a computer context may refer to several different things. It is most often used synonymously with "computer screen" or "display." It may also refer to speakers for professionally monitoring sound, called "studio monitors." The term may be used as a verb, referring to monitoring computer performance or network activity. Computer Screens A computer's monitor displays its user interface, desktop, and open programs; it is the primary output device you interact with. Some computers have integrated monitors as part of the device, like a laptop or all-in-one computer. In other cases, the monitor is a separate device that plugs into one of the computer's ports — either VGA, HDMI, DVI, or DisplayPort. A computer may have several ports to support multiple monitors at once. Most computer monitors are LCD screens designed specifically as computer monitors, but it's also common to use LCD and OLED televisions as computer monitors. Before LCD screens were available and affordable, computers used bulky CRT monitors. Studio Monitors In order to properly mix and master audio, professional audio engineers require extremely accurate speakers. Speakers designed for this purpose are called "studio monitors." Unlike consumer-focused speakers, which are often tuned to emphasize low-end and high-end frequencies, studio monitors replicate the audio frequencies as accurately as possible. Studio monitors are also designed for a short distance between the listener and speaker, as they are often placed next to a workstation only a few feet from the audio engineer, while speakers designed for home audio are meant to fill the whole room with sound. Information Monitoring When used as a verb, "monitor" refers to the practice of watching the flow of information. System monitoring utilities help a computer user track how much of their computer's processing power, memory, storage capacity, and network bandwidth is in use; it also helps track what programs and processes are using those resources. Regularly monitoring system activity can help spot unusual activity and troubleshoot problems. Updated December 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/monitor Copy

Monochrome

Monochrome The word "monochrome" literally means "one color." Therefore, a monochrome image only includes one color, but may contain many shades. In computing, "monochrome" typically refers to a two-tone image, rather than one with several shades of a single color. For example, a monochrome monitor uses one color for the background and another to display text or images on the screen. Before color monitors became standard, most computers had monochrome displays. These displays often had a black background with green text, though some displayed text in other colors, such as red or orange. While this may seem like a rudimentary way to display text, it was sufficient for typing documents, since computers still offered more text editing capabilities than a typewriter. Even after color monitors became the norm in the 1980s, monochrome displays were still used for several years as computer terminals. Today, monochrome computer monitors are rare. By the time LCD displays replaced CRT monitors in the early 2000s, monochrome screens had already been obsolete for several years. Now, even basic terminal displays support a wide range of colors. While you might not see monochrome monitors today, monochrome displays can still be found in other electronics, such as watches, timers, and digital clocks. NOTE: Monochrome is not the same thing as grayscale. A grayscale image is a type of monochrome image that only contains shades of gray. Additionally, monochrome and black-and-white are two different things. The phrase "black-and-white" may refer to a monochrome image that only includes the colors black and white or a grayscale image with multiple shades of gray. Updated April 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/monochrome Copy

Monterey

Monterey macOS Monterey (macOS 12) is the 18th version of Apple's desktop operating system. It follows macOS Big Sur (macOS 11) and was released on October 25, 2021. Monterey is a major macOS release (version 12.0), but contains fewer significant updates than its predecessor Big Sur. While macOS 11 included major changes to the user interface, macOS 12 maintains the same look and feel and nearly all the same icons. The most notable changes in Monterey are improvements to Apple-branded apps. Examples of app updates in Monterey include: FaceTime: adds "SharePlay," which supports screen sharing and allows multiple people to watch videos together Photos: improves the Memories feature with a new look, an interactive interface, and more memory types Notes: adds tags and Smart Folders for categorizing notes Shortcuts: brings the iOS app to macOS, which integrates with Automator Maps: provides new transit information and more detailed information for large cities Some OS-level updates in macOS Monterey include: Focus: enables custom notifications while working as an alternative to Do Not Disturb Universal Control: allows the same keyboard, mouse, and trackpad to be used with a Mac and iPad Accessibility: includes "Full Keyboard Access," which makes it possible to control everything on a Mac using only a keyboard Translation: adds system-wide translation to text via right-click → Translate macOS Monterey supports MacBook Pros, MacBook Airs, and iMacs released in 2015 and later, Mac minis released in 2014 and later, and Mac Pros released in 2013 and later. Updated November 26, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/monterey Copy

Moodle

Moodle Moodle is an acronym for "Modular Object-Oriented Dynamic Learning Environment." It is an online educational platform that provides custom learning environments for students. Educators can use Moodle to create lessons, manage courses, and interact with teachers and students. Students can use Moodle to review the class calendar, submit assignments, take quizzes, and interact with their classmates. Moodle is used by thousands of educational institutions around the world to provide an organized and central interface for e-learning. Teachers and class administrators can create and manage virtual classrooms, in which students can access videos, documents, and tests. Course chat allows students to communicate with the teacher and other students in a secure environment. Each Moodle classroom and course can be customized by the class administrator. For example, one teacher may choose to provide a wiki that students can edit, while another may opt to use a private web forum for online discussions. Some teachers may use Moodle to simply provide documents to students, while others may use it as the primary interface for quizzes and tests. Individual class sizes can be scaled from a handful of students to millions of users. In order to create a Moodle learning environment, the Moodle software must be downloaded and installed on a web server. The Moodle platform is open source and is built using a modular design, so advanced users can modify the platform as needed. Individual users, such as teachers and students, can sign up for an account on the Moodle server and access content through either the web interface or the "Moodle Desktop" application. Updated March 3, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/moodle Copy

Motherboard

Motherboard A motherboard, also known as a "mainboard" or "logic board," is the primary circuit board in a computer that connects and controls the rest of its components. Every other piece of hardware in a computer ultimately connects to the motherboard, which serves as a hub that provides a path for the components to communicate with each other. A computer's motherboard includes several important components, like the computer's UEFI (or BIOS in older systems). This special ROM chip contains instructions used during system startup and other configuration settings. Motherboards also provide the slots and sockets for installing the CPU, RAM modules, and PCI Express expansion cards. Motherboards include integrated controller chips for networking, sound, USB, and data storage. They also provide the physical connections for these controllers, like USB ports, Ethernet ports, SATA ports, and M.2 slots. Motherboards provide data buses between these components to communicate back and forth. They also deliver the electricity from the computer's power supply to these internal components. Computer motherboards come in a few standard form factors, ensuring that any standard-size motherboard fits in a corresponding computer case. These standards specify the placement of mounting screw holes, the location of expansion slots and I/O ports, the type of power supply, and the connection with the case's buttons and ports. The full-size ATX standard, and the smaller microATX variant, are the two most common. NOTE: The chipsets built into a motherboard typically only support a specific series of CPUs. When CPU makers design a new series of processors, they also design the chipsets to support that processor's new features. This means that upgrading a computer's CPU generally requires a new motherboard as well. Updated February 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/motherboard Copy

Motion Tween

Motion Tween A motion tween is a feature available in Adobe Flash (formerly Macromedia Flash) that allows you to easily animate the motion of an object. Instead of defining the location of the object in every frame, you can create a motion tween, which will automatically move the object from the beginning location to the ending location. To create a motion tween, simply select a layer in the timeline and drag an object onto the stage. Then select the number of frames in the timeline you would like to use for the duration of the animation. To create the motion tween, you can either right-click in the timeline and select "Create Motion Tween," or simply choose Insert → Motion Tween from the menu bar. NOTE: In order for Flash to create the tween, you may need to convert the object to a symbol. Once the tween has been created, you can click on any frame within the motion tween and move or rotate the object. For example, you can click on the last frame in the motion tween and move the object to a different part of the stage. When you run the animation, Flash will automatically calculate the location of the symbol for each frame and smoothly move the symbol from the start location to the end location. You can modify the acceleration of the object using the "Ease" property in the Properties palette. Motion tweening has become the standard way of animating symbols in Flash animations. While the name "motion tween" is specific to Adobe Flash, the phrase is sometimes used to refer to automated movements in other animation software as well. Updated December 7, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/motion_tween Copy

Mount

Mount Mounting is the software process of activating a volume on a disk drive to make its files and folders accessible to a computer's operating system. It is the second step (after physically plugging it in) before a computer can access the disk's contents. Most operating systems mount a disk automatically as it's connected, but in some cases, you may need to mount a disk manually through a software command. When Windows mounts a disk, it assigns it a unique drive letter. This letter identifies the disk in the file system, allowing the File Explorer and other applications to access its contents by navigating to that drive and following its folder structure. Unix-based operating systems like macOS and Linux instead mount new volumes as special directories within the hierarchal file system. For example, connecting a USB drive to a Windows computer mounts it to a drive letter like E:, connecting it to a Mac mounts it in the /Volumes/ directory, and connecting it to a Linux computer mounts it in the /media/ directory (although a Linux user can decide to mount it elsewhere through the mount command). You can then use any file manager to navigate to that location to view the disk's contents. An external hard drive mounted in macOS, with its location in the file system highlighted In addition to mounting physical disks, you can mount disk image files. Most operating systems include a built-in way to mount disk image files within their file manager. Once mounted, you can access the contents of a disk image just as you would the contents of a physical disk. The opposite of mounting a disk is unmounting it, which removes it from the computer's file system and prevents the operating system from accessing its contents. You should always unmount a storage drive before disconnecting it — unexpectedly disconnecting a disk while it is involved in a data transfer may corrupt some of that data. Optical discs are automatically unmounted when ejected from the drive, but you should remember to unmount Flash drives and external hard drives before unplugging them. Updated July 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mount Copy

Mountain Lion

Mountain Lion Mountain Lion is another name for Mac OS X 10.8, the ninth version of Apple's desktop operating system. It was released on July 25, 2012, almost exactly one year after Mac OS X 10.7 Lion. Like Snow Leopard, Mountain Lion was not a major update to the Mac OS. Instead, it was primarily a performance update and included a small number of enhancements to further integrate Mac OS X with iOS, Apple's mobile operating system. Unlike previous versions of Mac OS X, Apple officially labeled Mountain Lion "OS X" (removing "Mac" from the name). This change, while symbolic, represents Apple's effort to integrate their desktop computers and mobile devices into a more cohesive environment. For example, Mountain Lion allows Macs to share reminders, notes, and documents with iOS devices using the cloud. Notification Center, introduced with OS X 10.8, provides many of the same updates as the Notifications feature in iOS. The new Messages application can send and receive iMessages, which allows Mac users to communicate directly with people using iPhones and iPads. OS X Mountain Lion also added a few other notable features, including Dictation, which converts speech to text, and AirPlay, which streams media to other Apple devices. OS 10.8 also includes Gatekeeper, a tool that protects your Mac from malicious software. The new Power Nap feature updates Mail, Contacts, Calendar, and Reminders data on Mac laptops while they are in sleep mode. Additionally, Mountain Lion includes several new social networking features, such as a "Share" button added to applications, Facebook and Twitter integration, and Game Center, a social gaming network that was previously only available on iOS devices. NOTE: Like Lion, Apple's Mountain Lion operating system is not sold on a disc, but is only available as a download from the Mac App Store. Updated October 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mountain_lion Copy

Mouse

Mouse While most people don't want to see a mouse running around in their home, they typically don't have a problem seeing one sitting by their computer. This is because, along with the keyboard, the mouse is one of the primary input devices used with today's computers. The name comes from the small shape of the mouse, which you can move quickly back and forth on the mouse pad, and the cord, which represents the mouse's tail. Of course, if you are using a wireless mouse, the analogy does not work so well. All mice have at least one button, though most mice have two or three. Some also have additional buttons on the sides, which can be assigned to different commands. Most mice also have a scroll-wheel, which lets you scroll up and down documents and Web pages by just rolling the wheel with your index finger. Early mice tracked movement using a ball in the bottom of the mouse. This "mouse ball" pushed against different rollers as it moved, measuring the mouse's speed and direction. However, now most mice use optical technology, which uses a beam of light to track the mouse's motion. Optical mice are more accurate than roller-based mice and they have the added bonus of not getting dirty inside. A plurality of plurals When you refer to more than one mouse, you can call them either "mice" or "mouses," since both terms are acceptable. However, "mouses" is technically the correct version since "mice" is the plural form designated for living creatures. Still, most people have a hard time saying "mouses," which is why "mice" is more commonly used. Updated August 11, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mouse Copy

Mouse Pad

Mouse Pad A mouse pad, or "mousing surface," is a surface designed for tracking the motion of a computer mouse. Early mouse pads were literally "pads," which had soft surfaces. The slight cushion allowed the mouse ball to roll smoothly in any direction. Modern mouse pads typically have harder surfaces, designed to track the motion of optical mice. Mouse pads come in many shapes and sizes. Most are rectangular, though some are circular or oval shaped. While the majority of mouse pads are less than a foot wide or long, graphic artists and CAD designers sometimes use mouse pads that are much larger. These super-sized mousing surfaces allow them to use a slow, precise mouse speed without needing to lift up the mouse very often. Computer gamers also prefer larger mouse pads, because they offer more desktop real estate for making long, fast motions. Since optical mice track motion by detecting small changes in the surface below the mouse, the mouse pad's surface affects how accurately the mouse responds to movement. Lighter colored mouse pads typically produce the best results since they reflect the most light. Some highly reflective mouse pads even claim to have "battery saving" capabilities for wireless mice. The improvement in battery life most likely comes from how quickly the mouse enters low-power mode when it stops moving. Since optical mice work on just about any opaque surface, you typically do not even need a mouse pad to use a mouse. However, a good mouse pad can provide smooth, accurate motion, which is beneficial if cursor accuracy is important for your work. When choosing a mouse pad, make sure you get one large enough for your needs, but not so large that it won't fit on your desk. If possible, choose a mouse pad with a built-in wrist rest or buy a separate wrist rest if necessary. This small ergonomic addition will relieve a lot of strain on your wrist no matter what mouse pad you use. Updated September 2, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mouse_pad Copy

MP3

MP3 Stands for "MPEG Audio Layer-3." MP3 is a compressed audio file format developed by the Moving Picture Experts Group (MPEG). A typical MP3 file sounds similar to the original recording but requires significantly less disk space. MP3 files are often about one-tenth the size of an uncompressed WAVE or AIFF file, which have the same audio quality as a CD. The small file size and high fidelity of MP3 files helped popularize digital music downloads in the late 1990s and early 2000s. Instead of requiring a 40 megabyte download for a single song, the comparable MP3 file might be 4 MB. MP3s made it possible for users to download entire albums in roughly the same time it took to download a single WAV or AIFF file. For over a decade, MP3s were the most common way to store music files on computers and portable music players like the iPod. While MP3s are still prevalent on the Internet, other file formats are now also used for audio compression. Advanced Audio Coding (AAC), for example, is used for songs on Apple's iTunes Store. Ogg Vorbis is an open container format that is frequently used for royalty-free compression and streaming. The MP3 File Format An MP3 file includes a header, metadata, and compressed audio. The header includes information about the audio, such as the version of the encoding, the bitrate, and the sample rate. A high bitrate and sample rate produces better audio quality, but also a larger file size. A common MP3 compression setting is a bitrate of 128 kbps and a sample rate of 44.1 kHz. This produces a file size of roughly one megabyte per minute. MP3 metadata provides information about the actual recording. This data is usually saved in an ID3 tag, which is a standard format supported by most hardware and software media players. The bulk of the MP3 file contents is the actual compressed audio, which is stored as binary data. File extension: .MP3 Updated October 24, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mp3 Copy
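The "roughly one megabyte per minute" figure follows directly from the bitrate; a quick back-of-the-envelope check in Python:

bitrate = 128_000            # bits per second (128 kbps)
seconds = 60
total_bits = bitrate * seconds          # 7,680,000 bits per minute
megabytes = total_bits / 8 / 1_000_000  # convert bits to megabytes
print(round(megabytes, 2))              # 0.96 MB per minute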

MPEG

MPEG Stands for "Moving Picture Experts Group." MPEG is an organization that develops standards for encoding digital audio and video. It works with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to ensure media compression standards are widely adopted and universally available. The MPEG organization has produced a number of digital media standards since its inception in 1988. Examples include: MPEG-1 – Audio/video standards designed for digital storage media (such as an MP3 file) MPEG-2 – Standards for digital television and DVD video MPEG-4 – Multimedia standards for computers, mobile devices, and the web MPEG-7 – Standards for the description and search of multimedia content MPEG-MAR – A mixed reality and augmented reality reference model MPEG-DASH – Standards that provide solutions for streaming multimedia data over HTTP (such as servers and CDNs) Using MPEG compression, the file size of a multimedia file can be significantly reduced with little noticeable loss in quality. This makes transferring files over the Internet more efficient, which helps conserve Internet bandwidth. MPEG compression is so ubiquitous that the term "MPEG" is commonly used to refer to a video file saved in an MPEG file format rather than the organization itself. These files usually have a ".mpg" or ".mpeg" file extension. File extensions: .MP3, .MP4, .M4V, .MPG, .MPE, .MPEG Updated October 30, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mpeg Copy

MSP

MSP Stands for "Managed Service Provider." An MSP is a third-party company that remotely manages and often provides both network infrastructure and IT services on behalf of a customer. These services include digital security monitoring (often through a Security Operations Center) and hosting the customer's data. Traditionally, the term MSP only applied to companies that provided infrastructure or device-centric services. Application Service Provider referred to companies that offered remote assistance for customers' software and applications. Now, most MSPs provide active support for most, if not all, of the customer's end users and administration of systems such as Active Directory. An MSP that provides cybersecurity services for its clients may be called a Managed Security Service Provider, or MSSP. It assists clients with a Security Operations Center that monitors the endpoints and network infrastructure. An MSSP may also create a disaster recovery plan and an incident response plan should the client experience a security breach, such as ransomware or hacking. MSP customers range from small and medium-sized organizations to Forbes 500 companies and large government entities. Due to the large span of client types, MSPs may offer different types of service-level agreements and support. For example, an enterprise contract may include 24/7 phone support, while a small business agreement may provide email support during business hours. Updated February 16, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/msp Copy

MTU

MTU Stands for "Maximum Transmission Unit." A network connection's MTU setting determines the largest size data packet that can travel over that connection. Different network connections support different MTU sizes and have their own default values, but networking devices may also set their own MTU size in their network settings. The most common default MTU value for home networks and TCP/IP internet connections is 1500 bytes, based on the standard MTU for Ethernet networks. A data packet's MTU value combines the size of its header information and its data payload. Larger MTU values allow more data to fit in each packet, reducing the proportional amount of data dedicated to overhead. However, smaller MTU values can reduce network latency since it takes less time to send a complete packet — all other conditions being equal, a device can receive and process two 600-byte packets before one 1500-byte packet finishes transferring. A network connection's MTU setting in macOS When a device tries to send a data packet that is too large and exceeds the connection's MTU, that data packet first needs to be broken up in a process known as fragmentation. The receiving device reassembles these packets back into their original size, but this adds extra overhead and can slow down data transfers. Some protocols, like IPv6, do not allow fragmentation at all — instead, routers will drop packets that exceed the MTU size and send ICMP requests for smaller packets from the device that sent them. Updated November 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/mtu Copy
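To illustrate fragmentation conceptually, here is a minimal Python sketch that splits a payload into chunks that fit within a 1500-byte MTU. The 20-byte header size is an assumption (a typical IPv4 header without options), and real IP fragmentation also tracks offsets and flags that are omitted here.

def fragment(payload: bytes, mtu: int = 1500, header_size: int = 20):
    """Split a payload into pieces small enough to fit in one packet each."""
    max_data = mtu - header_size          # room left for data after the header
    return [payload[i:i + max_data] for i in range(0, len(payload), max_data)]

pieces = fragment(b"x" * 4000)
print([len(p) for p in pieces])           # [1480, 1480, 1040]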

Multi-Core

Multi-Core Multi-core technology refers to CPUs that contain two or more processing cores. These cores operate as separate processors within a single chip. By using multiple cores, processor manufacturers can increase the performance of a CPU without raising the processor clock speed. Since the upper threshold of clock speeds has leveled out during recent years, multi-core processors have become a common means to improve computing performance. Most modern computers have at least two cores, or a dual-core processor. Some high-end machines have four-core (quad-core), six-core (hexa-core), or eight-core (octo-core) processors. While adding more cores does not increase the overall computing performance by a proportional amount (two cores do not equal twice the speed), multi-core processors do provide a substantial performance boost over a single-core CPU. Additionally, a multi-core processor can run more efficiently than a single processor, since cores that are not needed can remain inactive. For example, Intel's "Turbo Boost Technology" can turn off power to entire cores when they are not being used. It is important to understand that multiple cores are different than multiple CPUs. While a multi-core computer may contain two processing cores on a single chip, a multiprocessor computer may have two CPUs, each with a single processing core. Since multi-core computing is more energy and cost efficient, multi-core computers have become more popular than multiprocessor computers. However, some high-end machines combine the two technologies and include multiple CPUs, each with multiple cores. Updated November 20, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/multi-core Copy

Multi-Factor Authentication

Multi-Factor Authentication Multi-Factor Authentication (MFA) is a security measure that requires multiple levels of authentication before granting a user account access to a system. A user is required to provide multiple factors of identification, including something the user knows (like a user name and password), something the user has (like a security token or mobile device), and something the user is (a biometric identification like a fingerprint or face scan). If a system requires only a username and password as login information, it can be accessed by anyone who knows them — whether they're the authorized account owner or not. MFA systems add some extra steps to the authentication process to ensure that the person logging into the account is the account owner. Multi-Factor Authentication systems are similar to Two-Factor Authentication but with the option to require 2, 3, or more authentication steps. There are three categories of authentication factors used in MFA. An MFA system requires at least two of these factors to ensure that the correct user is logging in to the account. Knowledge is something that the user knows. This factor is usually their username and password. Other knowledge factors like a PIN, passphrase, or security question can be used in conjunction with the username and password. Possession is something that the user has. Since this is often their smartphone, a code can be sent to the phone via SMS or generated by an MFA passcode generator app. Security token devices are also common — these small devices connect to a computer to provide authentication or display a one-time-use code that the user enters. Inherent factors are something that the user is, so many types of MFA use biometric identification. Fingerprint or face scans are the most common types, but some systems may use voice recognition or retinal scans for identification. In many cases, users only need to provide additional authentication the first time they log in using a particular device. After successful authentication, the site sets a cookie in the web browser that tells the website that MFA was successful and that only the username and password are required. Changing devices or web browsers will cause another MFA check to take place. Updated November 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/multi-factor_authentication Copy
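As an example of a possession factor, the Python sketch below generates a time-based one-time password (TOTP) of the kind produced by authenticator apps, following the standard HMAC-based approach; the base32 secret shown is a made-up illustrative value that the server and the user's app would both need to share.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """Generate the current time-based one-time password for a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)    # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # a 6-digit code that changes every 30 seconds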

Multicast

Multicast A multicast is a transmission of data from a single source to multiple recipients. Multicasting is similar to broadcasting, but only transmits information to specific users. It is used to efficiently transmit streaming media and other types of data to multiple users at one time. The simple way to send data to multiple users simultaneously is to transmit individual copies of the data to each user. However, this is highly inefficient, since multiple copies of the same data are sent from the source through one or more networks. Multicasting enables a single transmission to be split up among multiple users, significantly reducing the required bandwidth. Multicasts that take place over the Internet are known as IP multicasts, since they use the Internet protocol (IP) to transmit data. IP multicasts create "multicast trees," which allow a single transmission to branch out to individual users. These branches are created at Internet routers wherever necessary. For example, if five users from five different countries requested access to the same stream, branches would be created close to the original source. If five users from the same city requested access to the same stream, the branches would be created close to users. IP multicasting works by combining two other protocols with the Internet protocol. One is the Internet Group Management Protocol (IGMP), which users or client systems use to request access to a stream. The other is Protocol Independent Multicast (PIM), which is used by network routers to create multicast trees. When a router receives a request to join a stream via IGMP, it uses PIM to route the data stream to the appropriate system. Multicasting has several different applications. It is commonly used for streaming media over the Internet, such as live TV and Internet radio. It also supports video conferencing and webcasts. Multicasting can also be used to send other types of data over the Internet, such as news, stock quotes, and even digital copies of software. Whatever the application, multicasting helps reduce Internet bandwidth usage by providing an efficient way of sending data to multiple users. Updated September 19, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/multicast Copy
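For a concrete sense of how a client joins a multicast stream, the Python sketch below uses a standard socket option to issue an IGMP join for a group address; the group address and port are arbitrary example values.

import socket, struct

GROUP = "239.1.1.1"    # example multicast group address
PORT = 5007            # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Ask the local network stack to join the group (this triggers an IGMP membership report).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, sender = sock.recvfrom(65535)   # receive one datagram sent to the group
print(len(data), "bytes from", sender)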

Multimedia

Multimedia Multimedia is the integration of multiple types of media into a single presentation or package — text, images, video, and audio. Multimedia may even include interactive elements. The term may also refer to software used to create multimedia projects. As the power of personal computers expanded in the 1990s, multimedia software became increasingly common. Computers could play high-quality digital audio and video, and CD-ROM discs had enough storage capacity to provide it. Computer games were among the first titles to take advantage of that by adding video cutscenes, and CD-based encyclopedias like Microsoft Encarta provided students with the chance to see short videos mixed into their research materials. The combination of digital video, audio, text, and images is commonplace on the Internet today. For example, many news articles include text along with photos and videos. Presentation-making software allows you to embed animations, sound clips, and videos into a presentation (although you should still use them sparingly). Online references like Wikipedia can include videos and animations to help illustrate articles. Even music streaming software like Spotify or Apple Music mixes in animated lyrics and music videos. Updated December 15, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/multimedia Copy

Multiplatform

Multiplatform Multi-platform software is software developed for more than one operating system. Some software operates on multiple operating systems using a multi-platform framework, while other applications maintain separate codebases for each platform. Multi-platform software is also known as "cross-platform." There are multiple strategies for developing multi-platform software. Multi-platform frameworks built on web technologies are common for smaller applications since they allow a small team to deliver an application to a multi-platform audience. Larger teams can support separate versions of their applications with their own source code, where each version is written specifically for its platform. For example, Adobe Photoshop is a multi-platform app that maintains different codebases for each operating system, while the Slack app uses a multi-platform framework called Electron. In the consumer gaming market, multi-platform games run on more than one gaming machine. For example, a sports game developed for Xbox, Playstation, and PC would be a multi-platform game. If a game is developed exclusively for one system — i.e. a game in the "The Legend of Zelda" series for Nintendo — it is not multi-platform. Gaming hardware manufacturers use exclusive software to attract consumers to buy their systems. Updated November 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/multiplatform Copy

Multiprocessing

Multiprocessing For many years, the speed of computer processors increased through improvements in the architecture and clock speed of processors. However, in recent years, chip manufacturers have reached a limit in how small they can make the transistors inside CPUs without them overheating. Therefore, using multiple processors, or multiprocessing, has become the next step in increasing computing performance. Multiprocessing can be implemented in two different ways: 1) using more than one physical processor, or 2) using a processor with multiple cores. For example, early Power Mac G5 computers had multiple physical processors, each with their own heat sink and frontside bus. When Apple switched to using Intel processors in 2006, they began using dual-core processors. These chips look like a single processor, but act as two. Now, some machines like the Mac Pro, have quad-core processors, which include four processing cores. Some Mac Pros even have two physical quad-core processors, giving the computer a total of eight processors. Most Windows and Linux-based PCs now use multi-core processors as well. While multiprocessing sounds like a logical choice for improving computing performance, it must be supported by the computer's operating system in order to work correctly. Fortunately, current versions of both Windows and Mac OS X fully support multiprocessing. This means they can manage multiple processors as one CPU, dividing the processing load between them. Still, not all tasks can be split equally between two or more processors. Therefore, while multiprocessing may increase a computer's speed, it does not typically improve performance by the exact factor of processors in the machine. Updated September 26, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/multiprocessing Copy
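For software to take advantage of multiple processors or cores, the work must be divided into independent pieces that can run in parallel. A minimal Python sketch of this idea follows, assuming a machine with at least four cores:

from multiprocessing import Pool

def crunch(n):
    """A stand-in for a CPU-heavy task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Four worker processes; the operating system schedules them across available cores.
    with Pool(processes=4) as pool:
        results = pool.map(crunch, [1_000_000] * 8)
    print(len(results), "tasks completed")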

Multitasking

Multitasking Multitasking is processing multiple tasks at one time. For example, when you see someone in the car next to you eating a burrito, talking on his cell phone, and trying to drive at the same time, that person is multitasking. Multitasking also refers to the way a computer works. Unlike the phone-and-burrito-juggling driver, a computer's CPU can handle many processes at one time with complete accuracy. However, it will only process the instructions sent to it by the computer's software. Therefore, to make full use of the CPU's capabilities, the software must be able to process more than one task at a time, or multitask. Early operating systems could run multiple programs at one time, but did not fully support multitasking. Therefore, a single program could consume the computer's entire CPU while performing a certain operation. Basic operating system processes, such as copying files, prevented the user from performing other tasks, such as opening or closing windows. Fortunately, since modern operating systems include full multitasking support, multiple programs can run at the same time without affecting each other. Also, multiple operating system processes can take place simultaneously. Since multitasking can handle several tasks at once, it also improves the stability of the computer. For example, if one process crashes, it will not affect the other running programs, since the computer handles each process separately. In other words, if you are in the middle of writing a paper in a word processing program and your Web browser unexpectedly quits, you won't lose your work. That's when you can really be thankful for multitasking. Updated September 26, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/multitasking Copy

Multithreading

Multithreading Multithreading is similar to multitasking, but enables the processing of multiple threads at one time, rather than multiple processes. Since threads are smaller, more basic instructions than processes, multithreading may occur within processes. By incorporating multithreading, programs can perform multiple operations at once. For example, a multithreaded operating system may run several background tasks, such as logging file changes, indexing data, and managing windows at the same time. Web browsers that support multithreading can have multiple windows open with JavaScript and Flash animations running simultaneously. If a program is fully multithreaded, the different threads should not affect each other at all, as long as the CPU has enough power to handle them. Similar to multitasking, multithreading also improves the stability of programs. However, instead of keeping the computer from crashing, multithreading may prevent a program from crashing. Since each thread is handled separately, if one thread has an error, it should not affect the rest of the program. Therefore, multithreading can lead to fewer crashes, which is something we can all be thankful for. Updated September 26, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/multithreading Copy
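For illustration, here is a minimal Python sketch that runs two threads within a single process using the standard threading module; the task names and delays are arbitrary examples.

    import threading
    import time

    def background_task(name, delay):
        # Simulates an operation such as indexing data or logging file changes.
        time.sleep(delay)
        print(f"{name} finished")

    # Start two threads that run concurrently inside the same process.
    t1 = threading.Thread(target=background_task, args=("indexer", 1))
    t2 = threading.Thread(target=background_task, args=("logger", 2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("all threads complete")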

Music Tracker

Music Tracker A music tracker is a type of music sequencing software for arranging digital music. It allows composers to create music from sound samples and synthesized instruments by entering text and numbers in a spreadsheet-like grid. While music trackers were most popular in the 1980s and 1990s, they are still used now by creators of chiptune music. Unlike full-featured DAW applications, music trackers present musical information as text. Each column represents an audio channel, and each row represents a musical step. The text within each cell defines which sampled instrument to use, what note that instrument plays, and what modifiers to apply to that note (like pitch, volume, and audio effects). The resulting music file (known as a module file) contains only a few audio samples arranged and modified by text, so the file size of each module is smaller than a fully-recorded audio file. [Image: A module file in the MilkyTracker music tracker] Music trackers create a module file out of patterns. Each pattern is a distinct section of a song that a composer can use to assemble the final arrangement, repeating and rearranging them programmatically. For example, a module file for a standard pop song would have patterns for the intro, verse, pre-chorus, chorus, and bridge. The composer can set the intro pattern to only play the first time through, then cycle through the other patterns several times (or even continue to play endlessly). Video games often used module files for dynamic music that changed speed, added or removed instrumentation on the fly, or included distinct patterns for boss music in a single module file. NOTE: Module files created by music trackers often use the .MOD or .XM file extensions. Updated May 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/music_tracker Copy

MVC

MVC Stands for "Model-View-Controller." MVC is an application design model comprised of three interconnected parts. They include the model (data), the view (user interface), and the controller (processes that handle input). The MVC model or "pattern" is commonly used for developing modern user interfaces. It provides the fundamental pieces for designing programs for desktop or mobile, as well as web applications. It works well with object-oriented programming, since the different models, views, and controllers can be treated as objects and reused within an application. Below is a description of each aspect of MVC: 1. Model A model is data used by a program. This may be a database, file, or a simple object, such as an icon or a character in a video game. 2. View A view is the means of displaying objects within an application. Examples include displaying a window, buttons, or text within a window. It includes anything that the user can see. 3. Controller A controller updates both models and views. It accepts input and performs the corresponding update. For example, a controller can update a model by changing the attributes of a character in a video game. It may modify the view by displaying the updated character in the game. The three parts of MVC are interconnected. The view displays the model for the user. The controller accepts user input and updates the model and view accordingly. While MVC is not required in application design, many programming languages and IDEs support the MVC architecture, making it a common choice for developers. Updated March 7, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mvc Copy
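As a rough sketch of the pattern (not any particular framework's API), here is a minimal Python example with a model, a view, and a controller for the video game character scenario mentioned above; all class and attribute names are hypothetical.

    class CharacterModel:
        # Model: holds the data.
        def __init__(self, name, health=100):
            self.name = name
            self.health = health

    class CharacterView:
        # View: displays the data to the user.
        def render(self, model):
            print(f"{model.name}: {model.health} HP")

    class CharacterController:
        # Controller: accepts input and updates the model and view.
        def __init__(self, model, view):
            self.model = model
            self.view = view

        def take_damage(self, amount):
            self.model.health -= amount    # update the model
            self.view.render(self.model)   # refresh the view

    controller = CharacterController(CharacterModel("Hero"), CharacterView())
    controller.take_damage(10)   # prints "Hero: 90 HP"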

MySpace

MySpace MySpace is an online community that allows friends to keep in touch and meet new people as well. It started out as a website that bands could use to promote their music, but has since grown into a more general community of friends. Anyone who is at least 14 years old can sign up for a MySpace account at no cost. Once you sign up, you can customize your profile by adding information about yourself, listing your interests, hobbies, and educational background, and uploading photos of yourself and your friends. You can also create your own blog for others to read. Once you have created a profile on MySpace, you can search or browse other users' profiles. If you want to add someone as a friend, just click the "Add to Friends" link on that person's profile page. If the person approves your friend request, he or she will be added to your list of friends. Some users have only a few friends, while others have several thousand. You can send a private message to a user by clicking the "Send Message" link or post a comment on his or her page by clicking "Add Comment." Comments can be seen by all visitors to that person's profile, so be careful what you post! The "friends" concept is the heart and soul of MySpace. By building a list of friends, you have your own network of people readily accessible from your profile page. When you click on a friend's image, you can view their profile and all their friends. This makes it easy to meet friends of friends, and their friends, and so on. The number of people you can meet on MySpace is practically endless, which may be a part of the reason there are so many "MySpace addicts" out there. In order to create a MySpace account, you need to choose a username and password, which is used for logging in to your account. This gives you control over what appears on your profile page. The only way others can add content to your page is through comments, which you can choose to delete once you have logged in to your account. Updated March 6, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/myspace Copy

MySQL

MySQL MySQL is an open-source relational database management system that uses SQL to access and modify data. The "relational" aspect means it stores data in tables with rows and columns of related data. This approach makes data management more efficient and allows for easy access and retrieval of information. MySQL, pronounced both "My S-Q-L" and "My Sequel," is widely recognized for its speed, reliability, and flexibility. It is used by database administrators, web developers, and software programmers to manage large datasets. PHP and MySQL are popular technologies used to create dynamic websites. They are both open source and are often installed by default on Linux-based servers. The combination of Linux, Apache, MySQL, and PHP is sometimes referred to as "LAMP." [Image: Sample SQL code used to access data in a MySQL database] One of MySQL's key features is its scalability. It can handle a wide range of data, from a few rows to millions of records. Its scalability makes it suitable for both small projects and large, enterprise-level applications. Additionally, MySQL supports a wide range of data types, allowing developers to work with various forms of data, from simple text and numbers to complex binary data. MySQL operates on a client-server model. The server runs on a machine hosting the database, while clients connect to the server to access the database. Multiple clients can access the database simultaneously, making MySQL ideal for applications that require concurrent data access. The security of MySQL is robust, providing encrypted connections and authentication processes that effectively safeguard sensitive data from unauthorized access. Its comprehensive support for transaction processing, which allows for the grouping of several steps into a single process, is a crucial aspect of data integrity. This is particularly important in applications like e-commerce websites, where accurate and secure transactions are a necessity. Updated June 21, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/mysql Copy
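For illustration, here is a minimal Python sketch of the client-server model described above. It assumes the third-party mysql-connector-python package is installed, and the host, credentials, database, and table names are all placeholders.

    import mysql.connector  # third-party package: mysql-connector-python

    # Connect as a client to a MySQL server (all connection details are placeholders).
    conn = mysql.connector.connect(
        host="localhost",
        user="app_user",
        password="secret",
        database="shop",
    )

    cursor = conn.cursor()
    cursor.execute("SELECT id, name FROM products WHERE price < %s", (20,))
    for product_id, name in cursor.fetchall():
        print(product_id, name)

    cursor.close()
    conn.close()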

N-Key Rollover

N-Key Rollover N-key rollover, or "NKRO," is a feature of high-end keyboards that detects all keystrokes no matter how many keys are pressed simultaneously. It ensures every keystroke is recorded and prevents "ghosting," in which an extra keystroke may be registered when multiple keys are pressed together. In a basic keyboard, the electrical connections from the base of the keys to the keyboard output (typically a USB connection) may intersect. If you press multiple keys at the same time, the keyboard may register some keystrokes but not others. Ghosting happens when the data from two or more keystrokes gets combined and registers input from a key you did not press. Most keyboards have some level of rollover, meaning they support multiple simultaneous keystrokes. For example, an inexpensive keyboard might have 5-key rollover, while another might have 8-key rollover. This type of rollover is also called "x-key rollover," where x equals the number of keys that can be pressed and recorded simultaneously. N-key rollover means there is no limit to how many keys can be pressed and recorded at once. This feature is typically found in high-end keyboards, such as gaming keyboards used in eSports. Gamers often press several keys at once while playing video games, so it is essential for the keyboard to register all the input accurately. NOTE: The USB interface initially only supported 6 simultaneous keystrokes plus 4 modifier keys. Modern USB interfaces do not have a simultaneous keystroke limit. Updated November 14, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/n-key_rollover Copy

Name Server

Name Server A name server is a computer that translates domain names into IP addresses. Name servers serve a fundamental role in the Domain Name System (DNS). They allow a user to visit a website by entering its domain name into their web browser's address field instead of manually entering the web server's IP address. Every registered domain name must list at least two name servers in its DNS record. Known as "authoritative name servers," these name servers maintain a record of the IP addresses for that domain name's web server, mail server, subdomains, and other services. These name servers often use the naming convention ns1.example.com and ns2.example.com. The first-listed name server gets checked first, while the second-listed server acts as a backup in case the primary does not respond. The Domain Name System relies on several layers of name servers that fill separate roles. When your web browser first makes a DNS request, it checks with the DNS name server specified by your network's DHCP settings — typically, a name server operated by your ISP. If that name server doesn't have the domain name's record, your computer begins checking other servers starting from the top. The highest-level name servers are called "root DNS servers" and maintain the records of the name servers for each top-level domain (e.g. .com, .net, .ca). Each TLD's name server has the record for the authoritative name server for every domain using that TLD, which then directs your computer to the IP address of that domain's web server. [Image: A DNS request involves multiple name servers before it routes the request to the correct web server] Large enterprise networks often have their own name servers to direct domain lookups within their intranet. Internal name servers let users access internal web portals, printers, and other resources using internal domain names instead of IP addresses. These resources often use private IP addresses that are not accessible from outside of the local intranet. Updated July 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/name_server Copy
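To see name resolution in action, here is a small Python sketch using the standard socket module; it asks the system's configured name server to resolve a domain name, and example.com is just a sample domain.

    import socket

    # Resolve a domain name to an IPv4 address using the system's DNS settings.
    ip_address = socket.gethostbyname("example.com")
    print(ip_address)   # prints the resolved IPv4 address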

Namespace

Namespace A namespace is a group of related elements that each have a unique name or identifier. There are several different types of namespaces, and each one has a specific syntax used to define the corresponding elements. Each element within a namespace has a "local name" that serves as a unique identifier. Namespaces are used in many areas of computing, such as domain names, file paths, and XML documents. Below are examples of these different applications. Domain Names - The namespace syntax for domain names is specified by the Domain Name System, or DNS. It includes the registered domain name (e.g. "techterms.com") and a subdomain, such as "www." In the URL "www.techterms.com," the namespace identifier is "techterms.com," while the local name is "www." File Paths - File locations may be specified using a file path, which can include multiple directories. A file path, which uses syntax defined by the operating system, is considered a namespace. For example, C:\Program Files\Internet Explorer is the namespace that describes where Internet Explorer files are located on a Windows computer. The namespace /usr/local/apache/ defines the location of Apache files on a Unix-based web server. Individual filenames within these directories serve as unique identifiers. XML Documents - XML namespaces (XMLNS) are used to associate a document's element and attribute names with a namespace identified by an external URI. For example, an XML file may include HTML elements that are specified at "http://www.w3.org/1999/xhtml." This reference might appear as <html xmlns="http://www.w3.org/1999/xhtml"> near the top of the XML document. The above examples are just a few types of namespaces used in computing. They are also used to define network devices and other types of computer hardware. Additionally, computer programmers often use namespaces to group related variables within the source code of a program. While there are many different types of namespaces, they all serve the same purpose — to contain a logical grouping of related elements. Updated April 25, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/namespace Copy

NaN

NaN Stands for "Not a Number." NaN is a term used in mathematics and computer science to describe a non-numeric value. It may also be a placeholder for an expected numeric result that cannot be defined as a floating point number. There are two primary ways in which NaN may be generated: 1) a mathematical calculation and 2) non-numeric input. The following mathematical calculations produce NaN because the result is undefined: 0 ÷ 0 0 x ∞ ∞ ÷ ∞ When a calculation involves a character, string, or other non-numeric value, the result may also be NaN. For example, 20 x "horse" does not produce a numeric result since 20 is an integer and "horse" is a string. A function may return NaN as a result of invalid input, which is a preferred alternative to a program crash. Some spreadsheet and database programs display NaN or #NaN in a table cell when the cell formula has not received valid numeric input for the calculation. Different programming languages handle NaN values in different ways. For example, in JavaScript, NaN is a property of a global object (i.e. Number.NaN). JavaScript provides an isNaN() function to check if a value is NaN. PHP uses the function is_nan() for the same purpose. Both return a boolean value of true or false. Updated June 28, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nan Copy
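Python handles NaN in a similar way; here is a short sketch using the standard math module.

    import math

    result = float("nan")        # a floating-point NaN value
    print(math.isnan(result))    # True
    print(result == result)      # False: NaN never compares equal, even to itself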

NAND

NAND NAND is the most common type of flash memory. It is used in several types of storage devices, including SSDs, USB flash drives, and SD cards. NAND memory is non-volatile, meaning it retains stored data even when the power is turned off. What does NAND stand for? Surprisingly, NAND is not an acronym. Instead, the term is short for "NOT AND," a boolean operator and logic gate. The NAND operator produces a FALSE value only if both values of its two inputs are TRUE. It may be contrasted with the NOR operator, which only produces a TRUE value if both inputs are FALSE. NAND vs NOR Flash Memory NAND flash memory contains an integrated circuit that uses NAND gates to store data in memory cells. NOR flash memory stores data using NOR gates. While NOR devices read data faster, they are slower at writing data and do not store data as efficiently. NAND devices write and erase data faster and store significantly more data than NOR devices of the same physical size. Overall, NAND storage is more efficient than NOR, which is why NAND is the most popular type of flash memory. As read and write speeds have improved, NAND devices have become faster than traditional hard drives. Therefore, SSDs and integrated flash memory have replaced HDDs in most computers. Updated July 25, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nand Copy
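As a quick illustration of the logic described above, here is a small Python sketch of the NAND operation and its truth table.

    def nand(a, b):
        # NAND is FALSE only when both inputs are TRUE.
        return not (a and b)

    for a in (False, True):
        for b in (False, True):
            print(a, b, nand(a, b))
    # Only the (True, True) combination produces False.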

Nanometer

Nanometer A nanometer (also "nanometre") is a unit of measurement used to measure length. One nanometer is one billionth of a meter, so nanometers are certainly not used to measure long distances. Instead, they serve to measure extremely small objects, such as atomic structures or transistors found in modern CPUs. A single nanometer is one million times smaller than a millimeter. If you take one thousandth of a millimeter, you have one micrometer, or a single micron. If you divide that micron by 1,000, you have a nanometer. Needless to say, a nanometer is extremely small. Since integrated circuits, such as computer processors, contain microscopic components, nanometers are useful for measuring their size. In fact, different eras of processors are defined in nanometers, in which the number defines the distance between transistors and other components within the CPU. The smaller the number, the more transistors that can be placed within the same area, allowing for faster, more efficient processor designs. Intel's processor line, for example, has included chips based on the 90-nanometer, 65-nanometer, 45-nanometer, and 32-nanometer processes. In 2011, Intel released chips that were created using a 22-nanometer process. Intel's "Broadwell" design, introduced in 2014, uses a 14-nanometer process. Abbreviation: nm Updated August 14, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nanometer Copy

NAS

NAS Stands for "Network Attached Storage." A typical computer stores data using internal and external hard drives. If the computer is connected to a network, it can share data on its connected hard drives with other systems on the network. While this allows multiple computers to send data back and forth, it requires that each computer share its files individually. Therefore, if a computer is turned off or disconnected from the network, its files will not be available to the other systems. By using NAS, computers can store and access data using a centralized storage location. Instead of each computer sharing its own files, the shared data is stored on a single NAS server. This provides a simpler and more reliable way of sharing files on a network. Once a NAS server is connected to a network (typically via Ethernet), it can be configured to share files with multiple computers on the network. It may allow access to all systems or may provide access to a limited number of authenticated machines. NAS servers typically contain multiple hard drives, providing a large amount of shared disk space for connected systems to save data. They are often used in business networks, but have become increasingly common in home networks as well. Since NAS uses a centralized storage device, it can be a simple way for family members to share and back up their data. Updated March 2, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nas Copy

NAT

NAT Stands for "Network Address Translation." NAT is a method of translating multiple internal IP addresses of computers on a local network to a single shared global IP address. Network routers use NAT to direct incoming traffic from the Internet to the correct destination on the local network. In addition to allowing multiple computers to share a single global IP address, NAT improves the security of a network by obscuring the local IP addresses of individual computers from the outside. Every device on a LAN has a private IP address for communication within that network. Typically, the only device on a local network with a public and globally-unique IP address is the modem or gateway that provides its connection to the Internet. The network's router uses NAT to translate every outgoing data packet by changing the source IP address from the computer's private address to the gateway's public one. It also replaces the source's port number with a new unique port number and saves that port to a record in a NAT table. When the router receives incoming data packets, it identifies the destination using the port number. It looks up that port in the NAT table and translates the destination address back to the correct computer's private address. [Image: NAT translating private IP addresses to a public IP address, using a unique port to identify the local computer] IPv4 uses 32-bit addresses, which limits the total available namespace to roughly 4 billion addresses; reserved blocks for private networks take up several million of those. Since this number is insufficient for every Internet-connected device to have a unique address, NAT is necessary to provide networked computers with an Internet connection. IPv6 uses 128-bit addresses, which provides enough address space for a practically unlimited number of unique devices (roughly 3.4 × 10^38 addresses) but is not yet universally adopted. Updated June 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/nat Copy
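The following Python sketch is a highly simplified, hypothetical model of the NAT table lookup described above; real routers track far more state and implement this in dedicated hardware, and the IP addresses and port numbers are placeholders.

    # Maps an assigned public port to the private (address, port) that owns it.
    nat_table = {}
    next_public_port = 40000

    def translate_outgoing(private_ip, private_port):
        # Replace the private source address with the gateway's public address
        # and a unique public port, remembering the mapping for return traffic.
        global next_public_port
        public_port = next_public_port
        next_public_port += 1
        nat_table[public_port] = (private_ip, private_port)
        return ("203.0.113.5", public_port)     # example public IP of the gateway

    def translate_incoming(public_port):
        # Look up which local computer an incoming packet belongs to.
        return nat_table.get(public_port)

    src = translate_outgoing("192.168.1.20", 52344)
    print(src)                         # ('203.0.113.5', 40000)
    print(translate_incoming(40000))   # ('192.168.1.20', 52344)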

Native File

Native File A native file is a file format created for a specific software program. A program's native file format supports all of the program's features and functionality in a way that other similar file types may not. Other programs may or may not recognize these proprietary file types without conversion, which may affect how some data is formatted. If a program has a native file format, it will save new documents in that format by default. By designing the native file format along with the application itself, developers can ensure that the native file supports all of the application's features. For example, Microsoft Word's native file type is DOCX, which supports every text formatting and page layout feature present in Word. Using the Save As... command to save a Word document as another file type, like RTF or ODT, may cause some elements to lose their formatting, change appearance, or lose their editability. If you want to preserve a native file's appearance while sharing it with someone who doesn't have the application, saving a copy of it in a more generic format, like PDF, is a good idea. [Image: Several native files for common applications] Not all applications necessarily have a native file type. Some applications save to standard file formats instead of proprietary ones. For example, basic image editors like Microsoft Paint save new files as common image formats like BMP and PNG, while many text editors use TXT and RTF as their default file types. Updated August 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/native_file Copy

Natural Number

Natural Number A natural number is an integer greater than 0. Natural numbers begin at 1 and increment to infinity: 1, 2, 3, 4, 5, etc. Natural numbers are also called "counting numbers" because they are used for counting. For example, if you are timing something in seconds, you would use natural numbers (usually starting with 1). When written, natural numbers do not have a decimal point (since they are integers), but large natural numbers may include commas, e.g. 1,000 and 234,567,890. Natural numbers will never include a minus symbol (-) because they cannot be negative. In computer science, natural numbers are commonly used when incrementing values. For example, in a for loop, the counter often increases by one with each iteration. Once the counter hits the limit (e.g., 10 in for (i=1; i<10; i++)), the loop is broken and the code after the loop is processed. Updated June 12, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/natural_number Copy

Navigation Bar

Navigation Bar A navigation bar is a user interface element within a webpage that contains links to other sections of the website. In most cases, the navigation bar is part of the main website template, which means it is displayed on most, if not all, pages within the website. This means that no matter what page you are viewing, you can use the navigation bar to visit other sections of the website. A website navigation bar is most commonly displayed as a horizontal list of links at the top of each page. It may be below the header or logo, but it is always placed before the main content of the page. In some cases, it may make sense to place the navigation bar vertically on the left side of each page. This type of navigation bar is also called a sidebar, since it appears to the side of the primary content. Some websites have both a horizontal navigation bar at the top and a vertical navigation bar on the left side of each page. The navigation bar is an important element of a website's design since it allows users to quickly visit any section within the site. If you've ever visited a website without a navigation bar, you may have found it difficult to locate the page you need. You may have also had to click "Back" several times in order to find a link to the next section you wanted to visit. Thankfully, web design has become more standardized in recent years and nearly every website now includes a navigation bar. NOTE: Developers often refer to the navigation bar as the "navbar." Updated July 26, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/navigation_bar Copy

Net Neutrality

Net Neutrality Net Neutrality is the principle that all Internet traffic is to be treated equally by Internet Service Providers (ISPs), regardless of its origin or the protocol it uses. While Net Neutrality protections are in place, an ISP cannot block, throttle, or prioritize traffic from specific websites or services (outside of performing nondiscriminatory traffic management and blocking illegal material). Abiding by these principles ensures that the Internet remains a level playing field for websites and services regardless of their relationships with individual ISPs. The concept of Net Neutrality reflects how the Internet worked in its early days. Early ISPs provided their customers with a connection to the Internet and treated all data packets on their network equally from end to end. The content of the data packet did not matter — whether it was a webpage, email message, or an online game, a packet was just sent to its destination as fast and efficiently as possible. However, as the Internet matured and became more popular, ISPs began throttling or blocking certain types of traffic, like BitTorrent file transfers. Internet activists argued that discriminating against certain types of traffic limited innovation, leading to early attempts at Net Neutrality regulations in the 2000s. Without those protections, an ISP could actively meddle in how their customers browse the Internet against their customers' wishes. An ISP could make side deals with specific websites to prioritize their traffic in a "fast lane," throttling the traffic of sites that don't pay. If the ISP also operated a video streaming service, they could prioritize their service over their competitors'. They could also limit some types of traffic like VPN connections to only customers on a premium-priced Internet plan, or block certain traffic entirely. Net Neutrality Around the World The rules regarding Net Neutrality within a country are the responsibility of that country's regulatory agencies. These regulations have exemptions that allow ISPs to perform some traffic management, although the specifics of those exemptions vary widely from country to country. For example, ISPs may still apply some level of traffic shaping to maintain network performance by throttling certain types of traffic that would otherwise overwhelm a network. Updated October 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/net_neutrality Copy

NetBIOS

NetBIOS Stands for "Network Basic Input/Output System." NetBIOS was introduced in 1983 by IBM as an improvement to the standard BIOS used by Windows-based computers. The BIOS provides an interface between the computer's operating system and the hardware. As the name implies, NetBIOS adds support for networking, including the ability to recognize other devices connected to the network. NetBIOS provides an API (Application Program Interface) for software developers to use. The NetBIOS API includes network-related functions and commands, which can be incorporated into software programs. For example, a programmer can use a prewritten NetBIOS function to enable a software program to access other devices on a network. This is much easier than writing the networking code from scratch. In other words, NetBIOS prevents programmers from having to "reinvent the wheel" just to get their program to connect to a network. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/netbios Copy

Netiquette

Netiquette Netiquette is short for "Internet etiquette." Just like etiquette is a code of polite behavior in society, netiquette is a code of good behavior on the Internet. This includes several aspects of the Internet, such as email, social media, online chat, web forums, website comments, multiplayer gaming, and other types of online communication. While there is no official list of netiquette rules or guidelines, the general idea is to respect others online. Below are ten examples of rules to follow for good netiquette: Avoid posting inflammatory or offensive comments online (a.k.a flaming). Respect others' privacy by not sharing personal information, photos, or videos that another person may not want published online. Never spam others by sending large amounts of unsolicited email. Show good sportsmanship when playing online games, whether you win or lose. Don't troll people in web forums or website comments by repeatedly nagging or annoying them. Stick to the topic when posting in online forums or when commenting on photos or videos, such as YouTube or Facebook comments. Don't swear or use offensive language. Avoid replying to negative comments with more negative comments. Instead, break the cycle with a positive post. If someone asks a question and you know the answer, offer to help. Thank others who help you online. The Internet provides a sense of anonymity since you often do not see or hear the people with whom you are communicating online. But that is not an excuse for having poor manners or posting incendiary comments. While some users may feel like they can hide behind their keyboard or smartphone when posting online, the fact is they are still the ones publishing the content. Remember – if you post offensive remarks online and the veil of anonymity is lifted, you will have to answer for the comments you made. In summary, good netiquette benefits both you and others on the Internet. Posting a positive comment rather than a negative one just might make someone's day. Updated December 30, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/netiquette Copy

Netmask

Netmask The terms netmask and subnet mask are often used interchangeably. However, subnet masks are used primarily in network configurations, while netmasks typically refer to classes of IP addresses. They are used to define a range of IP addresses that can be used by an ISP or other organization. There are three standard classes of IP addresses – A, B, and C – which have the following netmasks: Class A: 255.0.0.0 Class B: 255.255.0.0 Class C: 255.255.255.0 A Class A netmask defines a range of IP addresses in which the first section (the first octet) is the same, but the other sections may each contain any number between 0 and 255. Class B addresses have the same first two sections, but the numbers in the last two sections can be different. Class C addresses have the same first three sections and only the last section can have different numbers. Therefore, a Class C IP address range may contain up to 256 addresses. Technically, a netmask is a 32-bit value used to divide sections of IP addresses. While a class C netmask is commonly written "255.255.255.0," it may also be defined as 11111111.11111111.11111111.00000000. This binary representation reveals the 32 bits that make up the netmask (4 sections of 8 bits each). It also shows the way the netmask "masks" the IP addresses it contains. The sections with all 1's are predefined and cannot be changed, while the section with all 0's can be any number between 0 and 255. Updated August 10, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/netmask Copy
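As a small illustration of the 32-bit structure described above, this Python sketch converts a dotted-decimal netmask to its binary form.

    def netmask_to_binary(mask):
        # Convert each 8-bit section to binary and rejoin the sections with dots.
        return ".".join(format(int(octet), "08b") for octet in mask.split("."))

    print(netmask_to_binary("255.255.255.0"))
    # 11111111.11111111.11111111.00000000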

Network

Network A network consists of multiple devices that communicate with one another. It can be as small as two computers or as large as billions of devices. While a traditional network is comprised of desktop computers, modern networks may include laptops, tablets, smartphones, televisions, gaming consoles, smart appliances, and other electronics. Many types of networks exist, but they fall under two primary categories: LANs and WANs. LAN (Local Area Network) A local area network is limited to a specific area, such as a home, office, or campus. A home network may have a single router that offers both wired and wireless connections. For example, a computer may connect to the router via Ethernet, while smartphones and tablets connect to the router via Wi-Fi. All devices connected to the router share the same network and often the same Internet connection. A larger network, such as the network of an educational institution, may be comprised of many switches, hubs, and Ethernet cables. It may also include multiple wireless access points and wireless repeaters that provide wireless access to the network. While this type of network is much more complex than a home network, it is still considered a LAN since it is limited to a specific location. WAN (Wide Area Network) A wide area network is not limited to a single area, but spans multiple locations. WANs are often comprised of multiple LANs that are connected over the Internet. A company WAN, for example, may extend from the headquarters to other offices around the world. Access to WANs may be limited using authentication, firewalls, and other security measures. The Internet itself is the largest WAN since it encompasses all locations connected to the Internet. Updated January 19, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/network Copy

Network Topology

Network Topology Network topology describes the arrangement of nodes in a computer or communications network. It may refer to the physical topology of a network, which describes how each device physically connects to each other; it may also refer to a network's logical topology, which describes how data travels over a network between devices. Both physical and logical network topology affect the overall performance of a network, as well as how easy it is to expand and add new devices. There are several common physical network topology designs used in modern networks, each with benefits and drawbacks. Several common examples are listed below: Star networks use a central node that connects to every other node on the network. All data travels through that central node (like a router or a switch), which analyzes each data packet and routes it to its destination. Star networks are also known as hub-and-spoke networks. Tree networks are a type of hybrid Star network. A Tree network starts with a central "root" node, which connects to the central nodes of several Star networks. Ring networks form a circle, connecting each node to two adjacent nodes. Data traveling over a ring hops from node to node until it reaches its destination. Line networks are similar to Ring networks, except that the endpoints do not connect and form a circle. Mesh networks provide direct connections between all of the network's nodes. Mesh networks are very resilient, since there's a lot of redundancy, but they are complex to manage and do not scale well to large sizes. Bus networks provide a single central cable, or "bus," as a backbone for the network. Each node connects to the bus and listens for data packets addressed to it. While a network's physical topology is important, so is its logical topology — how data travels between devices and nodes over a network's physical connections. Network administrators can change a network's logical topology separately from its physical topology, allowing them to make changes without redesigning a network's infrastructure and moving network devices. For example, admins may create virtual networks to isolate certain devices from the rest of the network, even though the physical topology does not change. Updated September 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/network_topology Copy

Neural Network

Neural Network A neural network is an artificial intelligence computing system modeled after the way a human brain works. It uses layers of interconnected nodes as artificial neurons to process data and solve a given problem. The system learns over time by adjusting the connections between neurons and the functions performed by each one. AI developers use artificial neural networks to train artificial intelligence algorithms for different tasks. Computer vision neural networks learn to analyze images to recognize objects, allowing them to quickly identify faces, label images, and help moderate content. They learn to perform speech recognition to transcribe conversations and add captions to videos. Neural networks can also learn natural language processing to analyze written language to organize documents, generate article summaries, and serve as chatbots. How Neural Networks Work Each node in a neural network contains a mathematical function. Nodes are arranged in several layers — an input layer, several intermediate layers, and an output layer. Each node is connected to any number of nodes on the layers above and below it. A node takes input from lower-level nodes, assigns a weight value to each input, and performs a calculation on the combined values. If the result meets a pre-determined threshold, the node passes it onto the next level. This process repeats until the neural network passes the final value to the output layer. During the initial "training" phase, developers give neural networks various sets of data. For example, if the goal is to recognize faces, the data may be photos of different people. The first few times the system analyzes the data, the output may not be accurate. However, on subsequent analyses, it "learns" by changing the weight and threshold values for each node. The neural network is successfully trained once it consistently produces accurate results for the training data. Updated February 24, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/neural_network Copy
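As a rough sketch of the node behavior described above (not a complete or trained network), here is a minimal Python example of a single node weighting its inputs and applying a threshold; the input values, weights, and threshold are arbitrary examples.

    def node_output(inputs, weights, threshold=1.0):
        # Weight each input, sum the results, and pass the value on only if
        # the total meets the threshold; otherwise output 0.
        total = sum(value * weight for value, weight in zip(inputs, weights))
        return total if total >= threshold else 0.0

    # Example: three inputs from lower-level nodes with example weights.
    print(node_output([0.5, 0.8, 0.2], [0.9, 0.6, 0.4]))   # 1.01, above the threshold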

Newbie

Newbie A newbie is a person who has recently started using a certain technology. It may refer to hardware – such as a computer or smartphone – or software, such as an app, website, or web forum. The term "newbie" is a modern variation of the phrase "new boy," which was historically used to describe someone just starting school or a new job. While "newbie" sounds like a derogatory term, it does not have strong negative connotations. After all, in the modern era, everyone is constantly learning new technologies. Everyone is a newbie in some area. Even highly technical people may be considered newbies when learning new technologies. This is often the case when someone switches from one platform to another. For example, a person who has used an iPhone for many years may be an Android newbie if he or she buys a Google Pixel. A PC user might be considered an Apple newbie if he or she buys a Mac. Newbie vs Noob The term "noob" (or "n00b" in leetspeak) is a shortened version of "newbie." While the terms are sometimes used interchangeably, "noob" is commonly used in online gaming. The term "noob" may be considered derogatory since it implies the user lacks the skill or competence to be competitive. For example, if one player dominates another in an online multiplayer game, he may call the other person a noob at the end of the match. Of course, this is poor netiquette and the best way to respond is to improve your gaming skills and beat the player the next time you play. Updated January 26, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/newbie Copy

Newline

Newline Newline is a character that marks the end of a line of text. As the name implies, it is used to create a new line in a text document, database field, or any other block of text. When typing in a word processor, you can enter a newline character by pressing the "Enter" or "Return" key on your keyboard. This creates a line break (also known as a "carriage return" or "line feed") in the text and moves the cursor to the beginning of the next line. When a line break occurs at the end of a block of text, it is called a trailing newline. The newline character is important in computer programming, since it allows programmers to search for line breaks in text files. For example, if a data file lists one element per line, the items can be delimited by newline characters. In most modern programming languages, the newline character is represented by either "\n" or "\r". Some databases, like MySQL, store line breaks using a combination of "\r\n". By searching for newline characters in text strings, programmers can parse documents line by line and remove unwanted line breaks. Updated March 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/newline Copy
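For example, this small Python sketch splits a block of text on newline characters in order to process it line by line; the text itself is just a placeholder.

    text = "first line\nsecond line\nthird line\n"

    for line in text.split("\n"):
        if line:               # skip the empty string left by the trailing newline
            print(line)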

Newsgroup

Newsgroup A newsgroup is an online discussion forum accessible through Usenet. Each newsgroup contains discussions about a specific topic, indicated in the newsgroup name. You can browse newsgroups and post or reply to topics using a newsreader program. Access to newsgroups also requires a Usenet subscription. Most Usenet providers offer access for around $10 USD per month. Newsgroups may be either moderated or unmoderated. In a moderated newsgroup, a moderator must approve posts in order for them to become part of the discussion. In an unmoderated group, everything posted is included in the discussion. Some newsgroups may also use bots to moderate the content, automatically eliminating posts that are deemed offensive or off topic. While many people now use web forums and online chat instead of newsgroups, the service is still popular around the world. In fact, there are estimated to be over 100,000 newsgroups in existence. While many newsgroups host traditional text-based discussions, a large number of newsgroups are now used for file sharing. These newsgroups, which primarily provide links to files, often have the term "binaries" in their name. Newsgroup Examples Below are some examples of active newsgroups. The first part of the name (before the first dot) is the primary category (or hierarchy) of the newsgroup. For example, sci. is used for science-related discussions. alt.politics talk.religion sci.physics comp.software.testing alt.binaries.documentaries alt.binaries.multimedia.comedy You can browse discussions and post to newsgroups using a newsreader. Updated March 30, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/newsgroup Copy

Nextdoor

Nextdoor Nextdoor is a social media platform that connects neighbors. It provides a way for people who live in the same local area to interact and communicate online. While services like Facebook and Instagram allow users to build a network of friends and followers, Nextdoor automatically connects people by physical proximity. Users can see posts from anyone in their neighborhood or surrounding area, providing an open forum for neighbors. Since Nextdoor is local community-oriented, users often post comments about their neighborhood. For example, it's commonplace to share concerns about suspicious activity. It's also an ideal platform to post a notice of a lost item or missing pet. Many cities have authorized Nextdoor accounts, which they use to provide official announcements and updates to residents. Nextdoor also serves as a local marketplace, allowing neighbors to post items for sale. Local businesses can market their services through sponsored posts. People with similar interests can join groups and plan local events. In some ways, Nextdoor is the opposite of the metaverse since it aims to connect neighbors in the real world. Updated January 8, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nextdoor Copy

NFC

NFC Stands for "Near Field Communication." NFC is a short-range wireless technology that enables simple and secure communication between electronic devices. It may be used on its own or in combination with other wireless technologies, such as Bluetooth. The communication range of NFC is roughly 10 centimeters. However, an antenna may be used to extend the range up to 20 centimeters. This short range is intentional, as it provides security by only allowing devices to communicate within close proximity of each other. This makes NFC ideal for secure transactions, such as contactless payments at a checkout counter. If your smartphone or smartwatch supports NFC, you can enable a contactless payment option such as Apple Pay or Android Pay. After linking a credit card to your device, you can make payments by simply holding your device near an NFC-enabled payment machine. There are many other uses for NFC as well. Examples include: Paying a fare on public transit, such as a bus or train Confirming your ticket at a concert or sports event Syncing workout data from a fitness machine with your device Viewing special offers on your phone when you enter a store Loading information about an artist or piece of art at a museum Viewing a map and related information at a national park Loading a translated menu at a restaurant Checking in and checking out at a hotel Unlocking an NFC-enabled door lock NFC is often seen as an alternative to QR codes, which require scanning a square code with your device. Both technologies are designed for short-range transactions, but QR code scanning requires a camera, while NFC communication requires a near-field communication chip. NFC communication is arguably simpler since you don't need to manually scan anything with your device. Additionally, NFC provides two-way communication, while scanning a QR code is a one-way transaction. NOTE: Since the range of NFC is limited to only a few centimeters, it is ideal for quick data transfers and transactions. For communication that requires a longer time or longer distance, NFC Connection Handover technology may be used to transition the connection to Bluetooth. Updated April 7, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nfc Copy

NFS

NFS Stands for "Network File System." NFS is a protocol used to access files over a network. It was developed by Sun Microsystems and introduced in 1989. The protocol is platform-independent, meaning it works across multiple operating systems and network configurations. While "file system" makes up two-thirds of the NFS acronym, the Network File System is not an actual file system like NTFS or APFS. Instead, it is a protocol that provides standard commands for accessing files from network-based storage locations. It is built on the RPC protocol and uses remote procedure calls to access files. Examples of NFS commands include: NFSPROC_LOOKUP() - find files based on the filename NFSPROC_READ() - read from a file NFSPROC_WRITE() - write to a file NFS can mount shared files in a local directory, allowing client systems to access remote data as a local folder. The client can traverse subdirectories, look up file permissions, and read, write, and create files. NFS translates the file paths and file commands to work with the corresponding file system. NOTE: NFS is an open standard, meaning any developer can add Network File System support to an application. However, in order for NFS to function, both the server and client systems must support NFS. Updated February 10, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nfs Copy

NFT

NFT Stands for "Non-Fungible Token." An NFT is a digital asset, such as an image or video that someone can purchase. "Non-fungible" means the asset is not replaceable or redistributable. Unlike a stock photo (or video), which multiple people can buy, only one person can own an NFT. Common NFTs include digital photos, artwork, animations, and collectible sports cards. Anyone can create an NFT, but most are produced and sold by celebrities or known artists. When you buy an NFT, you receive official ownership rights to the asset. While digital media can be easily copied, the value of an NFT is its ownership rights. NFT licenses are stored in a blockchain, similar to a Bitcoin transaction record. When someone purchases an NFT, the transaction is recorded in the blockchain, and that person acquires exclusive rights to the asset. Are NFTs Unique? Each NFT is unique, but creators may offer multiple "editions" of an NFT. For example, an artist may sell five editions of the same digital painting, which five different people can purchase. A sports figure may offer several hundred or even several thousand editions of a digital sports card. Generally, the more editions, the less value each NFT has. Non-fungible tokens spiked in popularity in 2021, with many celebrities and artists jumping on the bandwagon. While some NFTs have sold for several million dollars, the long-term value of NFTs remains to be seen. Updated December 23, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nft Copy

NIC

NIC Stands for "Network Interface Card" and is pronounced "nick." A NIC is a component that provides networking capabilities for a computer. It may enable a wired connection (such as Ethernet) or a wireless connection (such as Wi-Fi) to a local area network. NICs were commonly included in desktop computers in the 1990s and early 2000s. In the 1980s and early 1990s, many computers did not include networking capabilities, so a NIC could be added as an expansion card. Most NICs were installed in a PCI slot on the motherboard. Early NICs included a BNC connector for coax network connections, though Ethernet ports soon became the standard. Therefore most NICs include one or more Ethernet ports. As wireless networking became more popular, wireless NICs also grew in popularity. Instead of an Ethernet port, wireless NICs are designed for Wi-Fi connections and often have an antenna to provide better wireless reception for the computer. Older wireless cards have PCI connections while most modern wireless NICs connect to a PCI Express slot. Since many different networking standards exist, it is best to match the specifications of a NIC to the standard of the network. For example, if you are connecting to a Gigabit Ethernet network, a Gigabit Ethernet NIC is the best choice. A 100Base-T card will work, but you will only get 1/10 of the possible data transfer rate. A 10 Gigabit Ethernet Card may also work, but you will only experience gigabit speeds on the network. Wireless cards also use the lowest common denominator between the network and the NIC. However, if a wireless card does not support a newer wireless standard (such as 802.11ac), it may not be able to connect to the network. NIC vs Network Adapter Technically, a NIC is a physical card that connects to an expansion slot in a computer. Many computers and wireless devices now include an integrated networking component called a network adapter. This may be an Ethernet controller and port attached to the edge of a motherboard or a small wireless networking chip located on the motherboard. A network adapter may also be a small peripheral that connects to a USB port. While the terms "NIC" and "network adapter" are often used synonymously, a NIC is a type of network adapter while a network adapter is not necessarily a NIC. Updated March 14, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nic Copy

NINO

NINO Stands for "Nothing In, Nothing Out." The acronym NINO (pronounced "nee-no") is a computer science term that states if nothing is entered into a program, nothing is produced. It can also be translated, "No Input, No Output." Computers operate by processing information. If there is no input (or information to process), there can be no output. Input can be entered by a human, such as typing text in a word processor or clicking on a link in a web browser. It can also be entered by software, such as a bootstrap operation or a bot that automatically executes commands. In computer programming, NINO can explain why a function does not produce a result. If it does not receive the parameters it needs to run correctly, the function may fail or produce a NULL value. If the function receives invalid input, it may either return nothing or a "garbage" result. This process can be summarized as GIGO (Garbage In, Garbage Out). A well programmed function checks all input and produces an error message if the data is either missing or invalid. This prevents NINO and GIGO errors since bad input can cause bugs or crashes within a program. Updated April 30, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nino Copy
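As a simple illustration of the input checking described above, here is a short Python sketch; the function and values are hypothetical examples.

    def average(values):
        # Guard against missing or invalid input instead of returning garbage.
        if not values:
            return None                       # nothing in, nothing out
        if not all(isinstance(v, (int, float)) for v in values):
            raise ValueError("input must be numeric")
        return sum(values) / len(values)

    print(average([2, 4, 6]))   # 4.0
    print(average([]))          # None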

NMS

NMS Stands for "Network Management System." An NMS is a system designed for monitoring, maintaining, and optimizing a network. It includes both hardware and software, but most often an NMS refers to the software used to manage a network. Network management systems provide multiple services. These include, but are not limited to: Network monitoring - NMS software monitors network hardware to ensure all devices are operating correctly and are not near or at full capacity. Alerts can be sent to network administrators if a problem is detected. Device detection - When a new device is connected to the network, the NMS detects it so that it can be recognized, configured, and added to the network. This is also called device provisioning. Performance analysis - An NMS can gauge the current and historical performance of a network. This includes the overall performance of the network as well as individual devices and connections. For example, the NMS may detect aspects of a network where throughput is nearing the maximum bandwidth available. The data can be used to optimize the flow of traffic and recommend the addition of new hardware if needed. Device management - An NMS can provide a simple way to manage multiple devices from a central location. It may be used to configure a device or modify settings based on the performance analysis. Examples include activating specific network ports on a switch or implementing bandwidth throttling for certain devices. Fault management - If a device or section of a network fails, an NMS may be able to automatically reroute traffic to limit downtime. This action may be performed on the fly or may be accomplished using a set of preconfigured rules. When a fault occurs, a network alert or notification is usually sent to one or more network administrators. Why use an NMS? In small networks, such as a residential or home network, an NMS is not necessary. However, it is important for any large network, such as those found in enterprise configurations. Server farms, data centers, and corporate networks may have hundreds or thousands of connected devices. It is important to have a central network monitoring system in place to manage the devices. An NMS provides an efficient way to locate, update, repair, and replace network equipment as needed. Updated April 6, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nms Copy

NNTP

NNTP Stands for "Network News Transfer Protocol." NNTP is the protocol used to connect to Usenet servers and transfer newsgroup articles between systems over the Internet. It is similar to the SMTP protocol used for sending email messages, but is designed specifically for newsgroup articles. NNTP enables both server/server and client/server communication. NNTP servers, for example, can communicate with each other and replicate newsgroups among each other. This is how the Usenet network is created and maintained. In the client-server model, NNTP allows a client system to connect to a Usenet server and provides commands for browsing and viewing articles on the server. Usenet was originally based on UUCP (Unix-to-Unix Copy), an early protocol used for file transfers between Unix systems. The only way to read articles via UUCP was to log into a Usenet server and read articles directly from the local disk. NNTP made it possible to access a Usenet server remotely from a client system running newsreader software. Examples of commands supported by the Network News Transfer Protocol include: ARTICLE - retrieve an article from a Usenet server GROUP - select a specific newsgroup IHAVE - tell the server the client has an article it may want LIST - retrieve a list of newsgroups available on the server NEWSGROUPS - receive a list of newsgroups created after a specific date and time NEWNEWS - receive a list of articles created after a specific date and time NEXT - go to the next message in the newsgroup POST - post a message or reply to an existing one In most cases, these and other NNTP commands are handled automatically by newsreader software. Newsreaders use these commands to communicate with Usenet servers and provide a graphical user interface (GUI) for browsing and viewing articles on the server. Updated March 21, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nntp Copy
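To give a feel for the protocol, here is a rough Python sketch that issues two of the commands listed above over a raw socket connection. The server name is a placeholder, and a real newsreader handles this exchange automatically.

import socket

HOST = "news.example.com"  # placeholder Usenet server

with socket.create_connection((HOST, 119), timeout=10) as s:  # NNTP uses port 119
    print(s.recv(1024).decode())              # server greeting, e.g. "200 ... ready"
    s.sendall(b"GROUP comp.lang.python\r\n")  # select a newsgroup
    print(s.recv(1024).decode())              # "211 <count> <first> <last> <group>"
    s.sendall(b"QUIT\r\n")                    # end the session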

NOC

NOC Stands for "Network Operations Center." A Network Operations Center (NOC) is a centralized location where IT professionals supervise, monitor, and maintain client networks, typically 24/7. It serves as the control hub of an organization's network infrastructure, helping ensure continuous data flow, server functionality, and uninterrupted network connections. Some NOCs are located within the data center they manage, while others are based in remote locations. The NOC team utilizes advanced monitoring tools to proactively identify and resolve network issues, maintaining optimal network health and maximizing uptime. NOC employees may also help with capacity planning, network optimization, and disaster recovery strategies. By analyzing traffic data, they can anticipate and mitigate potential network issues before they escalate. In an era where network reliability is crucial, NOCs are essential for managing traffic spikes, such as online sales promotions and live streaming of global sporting events. The network operations center is responsible for handling intense traffic loads, preventing downtime, and providing users with a fast and responsive online experience. Whether ensuring efficient operation during high-traffic periods or enabling smooth day-to-day connectivity, NOCs are the unsung heroes behind many of our uninterrupted digital experiences. Their proactive and comprehensive approach to network management is vital in a world where network performance directly impacts business success and customer satisfaction. Updated February 23, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/noc Copy

Node

Node A node is any computer or other device connected to a network that sends, receives, or redistributes data. For example, computers, file servers, network-connected printers, and routers are all nodes on a local area network. A typical network has two types of nodes — endpoints that both send and receive data, and redistribution points that direct data along its way to its destination. Most devices on a network, like computers, tablets, printers, and file servers, are endpoints where data starts or ends its journey. Redistribution points are network infrastructure devices like routers, wireless access points, modems, and switches that route data to its next stop, whether that is an endpoint, the next router on the network, or another stop on the Internet. Each node on a network has two addresses that identify it. Each device has its own MAC address, which it shares with the network router to uniquely identify itself. The router then assigns an IP address, which it uses to direct network traffic to the correct node. Updated December 6, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/node Copy

Node.js

Node.js Node.js is an open-source JavaScript runtime environment. It runs JavaScript code outside of a web browser, allowing web developers to use JavaScript as the back-end of their web applications. Node.js is also cross-platform and runs on Windows, Unix, Linux, and macOS. Before the introduction of Node.js, JavaScript could only run within a web browser. Websites could use JavaScript to add client-side interactivity on web pages and web applications, but the web server itself needed to use another language, like PHP, for its share of the work. Node.js allows JavaScript code to run server-side, which means that web developers can use one language for both the front- and back-end of their web apps. Web browsers use their own JavaScript engines, while web servers use Node.js. Node.js is popular among web developers for several reasons. First, it is efficient for applications that require many simultaneous connections, and can handle many requests at once. It also has a large ecosystem of open-source packages and libraries available through Node Package Manager, which allows developers to easily add new features by integrating existing modules into their applications. Finally, the ability to use one programming language instead of two reduces the complexity of a project and provides an easier learning curve. NOTE: Node.js uses Google's V8 JavaScript runtime engine, the same engine used by the Chrome web browser. Updated June 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/nodejs Copy

Non-Impact Printer

Non-Impact Printer Early printers, such as dot matrix and daisy wheel printers, were called impact printers, since they operated by striking an ink ribbon against the paper. Most modern printers, including inkjet and laser printers, don't include an ink ribbon and are considered to be non-impact printers. Non-impact printers are generally much quieter than impact printers since they don't physically strike the page. For example, inkjet printers spray tiny drops of ink onto the page, while laser printers use a cylindrical drum that rolls electrically charged toner onto the paper. Both of these methods are non-impact and provide an efficient printing process that produces little sound. The low-impact nature of inkjet and laser printers also means they are less likely to need maintenance or repairs than earlier impact printers. Updated May 25, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nonimpactprinter Copy

Non-Volatile Memory

Non-Volatile Memory Non-volatile memory (NVM) is a type of memory that retains stored data after the power is turned off. Unlike volatile memory, it does not require an electric charge to maintain the storage state. Only reading and writing data to non-volatile memory requires power. Storage devices, such as HDDs and SSDs, use non-volatile memory since they must maintain their data when the host device is turned off. Hard disk drives (HDDs) store data magnetically, while solid-state drives (SSDs) store data using flash memory cells in integrated circuits. Both can maintain their storage state for several years without power. Examples of non-volatile memory are listed below: Hard disk drive (HDD) Solid state drive (SSD) Flash drive (USB keychain) Optical media (CDs, DVDs, etc) Read-only memory (ROM) Since most storage devices need to maintain data without power, non-volatile memory is far more common than volatile memory. In computers, volatile memory is primarily used for RAM and temporary cache storage. Updated October 17, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/non-volatile_memory Copy

Northbridge

Northbridge The northbridge is a chip on some computer motherboards that connects the CPU to other primary system components. A computer's northbridge directly links the CPU to the primary system memory and several high-speed expansion slots. It also connects to the southbridge, which provides connectivity to other components. Most computers in the 1990s and 2000s used a northbridge chip, but in the 2010s, CPU manufacturers began integrating memory and PCI Express controllers directly into the CPU to reduce latency. A computer's northbridge chip connects to the CPU using a high-speed connection called the front-side bus. From the northbridge, data travels between the CPU and the system RAM — for this reason, the northbridge is sometimes called the Memory Controller Hub (MCH). It also connects the CPU to high-speed PCI Express lanes designed for the system's GPU. Finally, it links the CPU to the southbridge chip that provides additional PCI Express lanes and communicates with the system's storage controllers, USB ports, networking interface, and other I/O components. The northbridge connects the CPU to the RAM and GPU, as well as to the southbridge for other I/O. Advances in miniaturization and chip design now allow CPU makers to integrate the northbridge functions directly into the CPU itself. Removing the need for data to travel over the front-side bus and through the northbridge reduces latency and increases memory bandwidth. CPUs also incorporate the connection to several high-speed PCI Express lanes for the video card, or may even include the GPU itself. A separate chipset on the motherboard still provides other I/O connectivity, but the term "southbridge" is no longer used — it is instead called the Platform Controller Hub (PCH) or simply "the chipset." Updated December 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/northbridge Copy

NOS

NOS Stands for "Network Operating System" and is pronounced "N-O-S." A network operating system provides services for computers connected to a network. Examples include shared file access, shared applications, and printing capabilities. A NOS may either be a peer-to-peer (P2P) OS, which is installed on each computer, or a client-server model, where one machine is the server and others have client software installed. Peer-to-peer network operating systems include legacy OSes such as AppleShare and Windows for Workgroups. These operating systems offered unique networking capabilities that were not available in early versions of Mac OS and Windows. They enabled computers to recognize each other and share files over a cable connecting the machines. Over time, these networking features were integrated into standard operating systems, making P2P NOSes obsolete. Client-server network operating systems include Novell NetWare and Windows Server. These NOSes provide services from one computer to all connected machines. Novell NetWare requires specific client software to be installed on all client machines, while Windows Server works with standard Windows computers. In both cases, clients connect to the server and can access files and applications based on their access privileges. The central server manages all the connected machines and can provide updates as needed to the client systems. This makes it easy to keep all the computers on the network up-to-date. While client-server NOSes were used for several decades, they too have faded into obsolescence. Today, desktop operating systems have advanced networking capabilities, limiting the need for network operating systems. Additionally, many organizations now use intranets to provide web-based access to all local systems. Instead of requiring specific programs to be installed on each client, users can access web applications over a local network or the Internet. NOTE: A network operating system may also refer to a basic OS that runs on a network device, such as a router or firewall. Updated September 30, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nos Copy

NoSQL

NoSQL NoSQL is a non-relational database that stores and accesses data using key-value pairs. Instead of storing data in rows and columns like a traditional database, a NoSQL DBMS stores each item individually with a unique key. Additionally, a NoSQL database does not require a structured schema that defines each table and the related columns. This provides a much more flexible approach to storing data than a relational database. While relational databases (like MySQL) are ideal for storing structured data, their rigid structure makes it difficult to add new fields and quickly scale the database. NoSQL provides an unstructured or "semi-structured" approach that is ideal for capturing and storing user-generated content (UGC). This may include text, images, audio files, videos, click streams, tweets, or other data. While relational databases often become slower and more inefficient as they grow, NoSQL databases are highly scalable. In fact, you can add thousands or hundreds of thousands of new records to a NoSQL database with a minimal decrease in performance. Because of NoSQL's flexibility and scalability, many large businesses and organizations have begun using NoSQL databases to store user data. They are especially common in cloud computing applications and have become one of the most popular storage solutions for big data. NOTE: NoSQL is sometimes referred to as "Not only SQL," though it is not the official meaning of the term. Updated August 27, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nosql Copy
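A toy example helps show the key-value idea. The Python sketch below treats an ordinary dictionary as an in-memory store: each record is saved under a unique key, and records do not need to share the same fields (no fixed schema). Real NoSQL databases work on the same principle at much larger scale; the keys and field names here are made up.

store = {}

# Two records with different fields, stored under unique keys.
store["user:1001"] = {"name": "Ada", "email": "ada@example.com"}
store["user:1002"] = {"name": "Grace", "tweets": ["Hello!"], "photo": "grace.png"}

# Records are looked up directly by key rather than queried by table and column.
print(store["user:1002"]["tweets"])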

NSP

NSP Stands for "Network Service Provider." An NSP is a business that provides access to the Internet backbone. While some ISPs also serve as NSPs, in most cases, NSPs provide Internet connectivity to ISPs, which in turn provide Internet access to customers. NSPs build and maintain the primary infrastructure of the Internet. This includes fiber optic lines between hubs or "Internet exchanges" that route Internet traffic around the world. These communication lines offer extremely high bandwidth of hundreds or even thousands of gigabits per second. The global network created by multiple NSPs enables data to flow seamlessly between computer systems around the world. In the United States, most NSPs are commercial entities, such as AT&T, Verizon, and MCI. In other countries, NSPs are often owned and operated by the government, though some international NSPs are privately owned as well. While NSPs and ISPs are usually separate companies in the United States, in other parts of the world, they are commonly a single entity. Examples include Tele2 in Europe, and Tata Communications in India. Updated March 7, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nsp Copy

NTFS

NTFS Stands for "New Technology File System." NTFS is a file system created by Microsoft and used by the Windows operating system. It was introduced with Windows NT and has served as the default file system for every version of Windows since Windows 2000. It replaced the FAT32 file system used by the earlier DOS-based versions of Windows like Windows 98. NTFS supports both hard disk drives and solid-state drives. NTFS includes several features that improve performance and reliability compared to FAT32. NTFS is a fault-tolerant file system that supports journaling, which saves records of changes to files and folders to a journal file that the file system can refer to in case of a power failure or system crash. For example, if the file system was moving several files when a power failure occurred, it can use the data in the journal to roll back in-progress changes and recover any lost data. NTFS also includes several features that help keep files and folders secure on multi-user systems. It uses access control lists (ACLs) to define what level of permissions each user and user group has for individual files and folders. It also supports quotas for individual users, limiting each account to a certain amount of disk space to prevent a single user from filling the entire disk. It also includes transparent file compression that saves disk space and supports full-disk encryption using BitLocker. While Windows supports NTFS natively and uses it as the default file system, other operating systems only include partial support. For example, macOS can mount and read an NTFS volume but cannot write to one without a third-party software utility. Linux and Unix support vary by distribution, often including full support for reading an NTFS volume and only partial support for writing to one. For this reason, other file systems like exFAT are more common on removable USB flash drives in place of NTFS. Updated July 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ntfs Copy

NTP

NTP Stands for "Network Time Protocol." NTP is a protocol used to synchronize computer clocks across multiple systems. It supports synchronization over local area networks and the Internet. Matching the timestamps of two or more systems may seem like a simple task, but it involves multiple steps. Since all networks have some amount of latency, the delay between the request and the response must be taken into account. NTP uses the client-server model and calculates the round-trip delay using four values: client request timestamp server reception timestamp server response timestamp client reception timestamp The time between 1 and 2 above is added to the time between 3 and 4 to compute the total round-trip delay. By subtracting half of this delay, it is possible to estimate the exact time on a remote server, usually within a few milliseconds. Since network conditions can affect the time it takes to transmit or receive NTP packets, a single request may not produce an accurate result. Therefore, it is common to make several NTP requests and average the latency to produce a more accurate timestamp. Timestamps may also be averaged across multiple computers to produce a consistent time for all machines on a network. When syncing multiple clocks at once, NTP is used as a peer-to-peer protocol, in which each system is a time source. NTP Examples The network time protocol is used by several different time-syncing utilities, including tools built into Windows and macOS. In Windows, the Date and Time Control Panel includes an "Internet Time" feature that uses NTP to retrieve the current time from a time server. In macOS, the Date & Time System Preference uses NTP to retrieve the current time when "Set date and time automatically" is checked. Both Windows and macOS use a simplified version of NTP called Simple Network Time Protocol (SNTP). Updated May 20, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ntp Copy
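The arithmetic can be written out in a few lines. This is an illustrative Python sketch of the calculation described above, not the full NTP algorithm; the timestamps are made-up values in seconds.

def ntp_estimate(t1, t2, t3, t4):
    # t1 = client request, t2 = server reception,
    # t3 = server response, t4 = client reception
    delay = (t2 - t1) + (t4 - t3)           # total round-trip network delay
    offset = ((t2 - t1) + (t3 - t4)) / 2    # estimated error of the client clock
    return delay, offset

delay, offset = ntp_estimate(0.000, 0.110, 0.112, 0.220)
print(delay, offset)   # about 0.218 s round trip; client is about 1 ms behind the server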

NUI

NUI Stands for "Natural User Interface." An NUI is a type of user interface that is designed to feel as natural as possible to the user. The goal of an NUI is to create seamless interaction between the human and machine, making the interface itself seem to disappear. A common example of a natural user interface is a touchscreen interface, which allows you to move and manipulate objects by tapping and dragging your finger(s) on the screen. The digital objects on the screen respond to your touch, much like physical objects would. This direct feedback provided by a touchscreen interface makes it seem more natural than using a keyboard and mouse to interact with the objects on the screen. Another modern example of an NUI is a motion-based video game. The Nintendo Wii, for instance, allows you to wave a controller in the air to perform actions on the screen. Microsoft's Xbox Kinect allows you to control your on-screen character by simply moving your body. Both of these motion-based interfaces are considered natural user interfaces since they respond to your natural motions. While touchscreens and motion-based games are two of the most common types of NUIs, several others exist as well. For example, a voice recognition interface like Apple's Siri assistant on the iPhone is considered a natural user interface since it responds to naturally spoken commands and questions. Virtual reality devices are NUIs, since they emulate a real world experience. Even some robots are considered natural user interfaces since they respond to human motion and spoken commands. Updated July 26, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nui Copy

Null

Null Null, in computing terms, refers to the absence of a value. It does not mean a value of 0, since 0 is itself a value, nor does it mean a blank space " ". It also does not refer to an undefined variable, which has not been created at all. In the context of programming languages, a variable that has no value assigned is considered null. It may have been created without a value with the intention of adding one to it later, or it may have originally had a value that has been removed. A null variable can cause errors when the program tries to do something with that variable, expecting it to have a value. In the context of a database, null is often used as a placeholder when a field was left blank, because that data was nonexistent or not available. For example, a database of customer information might include a column for a customer's phone number. If a customer doesn't share that information when the database entry is created, and nothing is entered into that field, the phone number value will be null. That does not mean that the customer does not have a phone number, but that it is unknown whether they do and what it is. If the customer shares it in the future, it can be added to the field, replacing null. Updated September 26, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/null Copy
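In code, the safe pattern is to test for null before using a value. A small Python example follows (Python spells its null value None; other languages use null, nil, or NULL):

phone_number = None   # the field was left blank when the record was created

if phone_number is None:
    print("No phone number on file")
else:
    print(f"Calling {phone_number}")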

Null Character

Null Character A null character is a special type of character, called a control character, with all its bits set to zero. It is often used in computer programming languages to mark the end of a string. In practical applications, such as database and spreadsheet programs, null characters are used as filler spaces. In most character encoding sets, such as UTF-8 and ASCII, the null character is the first character in the set. When represented in binary, it appears as all 0s, or 00000000. A null character in computer source code is often represented by the escape character sequence \0, and is used by programmers to manually set the end of a string. For example, a piece of programming code that stores the text string "Help" will store the following individual characters: "H" "e" "l" "p" "\0" The null character, \0, marks the end of the string. Since the null character is a single byte, it is more computationally efficient than using several bytes to set a length when the string is created. Updated September 30, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/nullcharacter Copy
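The example above can be reproduced in a few lines of Python by looking at the raw bytes of a null-terminated string (the byte values are what matter here, not any particular file or library):

data = b"Help\x00"            # "Help" followed by a null character, as a C program stores it
print(len(data))              # 5 bytes: 'H', 'e', 'l', 'p', and the terminator
print(data.index(b"\x00"))    # 4 - the null character marks the end of the string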

Num Lock

Num Lock Num Lock is a key that is typically located near the keypad on the right side of a keyboard. Like Caps Lock, Num Lock is a toggle key, which toggles the input of certain keys, depending on whether Num Lock is on or off. Num Lock is short for "Number Lock." It was implemented in early PC keyboards to lock the input of the numeric keypad. When Num Lock is on, the keypad only enters numbers. When Num Lock is off, the keys may provide different input as indicated below: . - Delete 0 - Insert 1 - End 2 - Down Arrow 3 - Page Down 4 - Left Arrow 5 - Clear 6 - Right Arrow 7 - Home 8 - Up Arrow 9 - Page Up Since early PC keyboards did not have arrow keys, Num Lock gave the keypad a useful dual functionality. However, now that most keyboards have arrow keys and other special keys, such as Page Up and Page Down, the Num Lock key is not often used. On some keyboards (especially Macintosh keyboards), the Num Lock key is not functional. On others, it typically is set to ON by default. While you may not need to use the Num Lock key with your keyboard, it is helpful to know the key's function. If you find yourself trying to enter numbers with the numeric keypad and nothing appears on the screen, it is possible that Num Lock is OFF. If this is the case, pressing the Num Lock key should fix the problem. Updated October 15, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/numlock Copy

NVMe

NVMe Stands for "Non-Volatile Memory Express." NVMe is an internal storage device specification based on PCI Express. It provides a high-speed connection for solid state drives (SSDs), with more bandwidth and less latency than a SATA connection. NVMe drives look similar to RAM chips (memory modules), while SATA SSDs look more like traditional hard drives. The NVMe interface is simpler and more compact than a SATA or an mSATA connector since the drive controller is integrated into the drive. Western Digital NVMe SSD While NVMe is an open standard, specifications may differ between devices, such as PCIe version and interface type. For example, one NVMe drive may connect to an HHHL (Half Height, Half Length) PCIe 3.0 slot, while another may support PCIe 4.0 and connect to an M.2 slot. NVMe vs SATA
Speed: NVMe - faster (3.5 GB/s read/write); SATA - fast (600 MB/s read/write)
Maximum capacity: NVMe - lower (4 TB); SATA - higher (16 TB)
Form factor: NVMe - thin "card"; SATA - thicker rectangular "box"
Controller: NVMe - integrated into the drive; SATA - uses the motherboard's SATA controller
Cost: NVMe - more expensive; SATA - less expensive
NOTE: As of 2022, M.2 is the most common type of NVMe drive interface. Updated June 13, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nvme Copy

NVRAM

NVRAM Stands for "Non-Volatile Random Access Memory." NVRAM is a type of RAM that retains data after the host device's power is turned off. Two common types of NVRAM include SRAM and EEPROM. SRAM (pronounced "s-ram") retains data by using an alternative source of power such as a battery. SRAM is often used to store computer hardware settings that need to be maintained when the computer is shut down. Common examples include the BIOS settings on Windows computers or the PRAM settings on Macintosh systems. Since SRAM typically uses a battery to retain memory, if the battery dies or is disconnected, the data stored in the SRAM will be lost. Therefore, if BIOS or PRAM settings are not retained after a computer is restarted, it is likely the computer's battery has lost its charge and needs to be replaced. EEPROM (pronounced "e-e-p-rom") stores data using electrical charges that maintain their state without electrical power. Therefore, EEPROM does not need a battery or other power source to retain data. The most common type of EEPROM is flash memory, which is used for USB keychain drives and other portable electronic devices. Updated December 10, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/nvram Copy

Nybble

Nybble A nybble (sometimes spelled nibble) is a set of four bits. Since a full byte is eight bits, a nybble is half a byte. It is sometimes called a half-byte or tetrade; in the context of computer networking, it is also called a semi-octet, quadbit, or quartet. The four bits in a nybble provide a set of 16 possible values. A single nybble can be represented by one of the 16 hexadecimal digits (0-F); when shown this way, a nybble is called a hex digit. A full byte can be portrayed by a pair of hex digits (00-FF). The table below lists all 16 possible nybbles, along with the corresponding hex digit.
Binary Value - Hex Digit
0000 - 0
0001 - 1
0010 - 2
0011 - 3
0100 - 4
0101 - 5
0110 - 6
0111 - 7
1000 - 8
1001 - 9
1010 - A
1011 - B
1100 - C
1101 - D
1110 - E
1111 - F
Updated November 10, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/nybble Copy
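Because a nybble is exactly four bits, splitting a byte into its two nybbles takes only a shift and a mask. A short Python illustration:

byte = 0xC5                  # 1100 0101 in binary
high = (byte >> 4) & 0xF     # upper nybble: 1100 -> 0xC
low = byte & 0xF             # lower nybble: 0101 -> 0x5
print(hex(high), hex(low))   # 0xc 0x5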

OASIS

OASIS Stands for "Organization for the Advancement of Structured Information Standards." To someone backpacking through the Sahara, this is not the type of OASIS you want to see. But it is a welcome sight in the computer science world. OASIS is a non-profit, global consortium that supports the development and adoption of e-business standards. While it won't quench your thirst in the middle of the desert, OASIS does provide several useful technology standards. Common standards regulated by the OASIS consortium include protocols, file formats, and markup languages. Hardware and software companies often work with OASIS to develop and institute standards that are efficient and effective. The standards produced by OASIS are open standards, which means they can be used by any company or organization. This allows multiple companies to develop products based on the same standard, which offers a high degree of interoperability between different computer systems. For example, a file format standardized by OASIS may be supported by several different programs. Because each program can save files in the same format, the files can be opened by any of the programs without needing to be converted or translated. This makes transferring files between applications or even different systems a seamless process. Updated March 28, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/oasis Copy

Object

Object An object, in object-oriented programming (OOP), is an abstract data type created by a developer. It can include multiple properties and methods and may even contain other objects. In most programming languages, objects are defined as classes. Objects provide a structured approach to programming. By defining a dataset as a custom object, a developer can easily create multiple similar objects and modify existing objects within a program. Additionally, objects provide "encapsulation," meaning the data within an object is protected from being modified or destroyed by other functions or methods unless explicitly allowed. A simple example of an object may be a user account created for a website. The object might be defined as a class userAccount and contain attributes such as: first name last name email address password age location photo Instead of recreating these properties each time a new user account is created, a web script can simply instantiate a userAccount object. Data assigned to the object may be stored in a database if the user account is saved. A more advanced example of an object is a character in a video game. The character might have standard attributes, such as a name, hitpoints, and movement speed. It may also contain other objects, such as weapons, armor, items, etc. In this case, the character is the "parent object" and the objects it contains are "child objects." Both the parent and child objects can have their own properties and methods. For example, the character may have methods such as "move" and "attack." The "attack" command might reference the "weapon" object, which has its own methods, such as "swing" or "thrust." NOTE: While objects are usually associated with object-oriented programming, in general computer science terminology, an object may refer to a single programming element, such as a variable, constant, function, or method. Updated February 28, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/object Copy
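A rough Python sketch of the user-account example described above (the class and field names are illustrative only):

class UserAccount:
    def __init__(self, first_name, last_name, email, password, age, location, photo):
        self.first_name = first_name
        self.last_name = last_name
        self.email = email
        self.password = password
        self.age = age
        self.location = location
        self.photo = photo

# Creating a new user account is just a matter of instantiating the class.
user = UserAccount("Ada", "Lovelace", "ada@example.com", "hunter2", 36, "London", "ada.png")
print(user.email)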

Objective-C

Objective-C Objective-C is a general-purpose, high-level programming language. Created in 1984, it was the primary language for the NeXTSTEP operating system. Following Apple's purchase of NeXT, it became the primary language for writing apps for macOS and iOS. Apple created the Swift programming language as its replacement in 2014, but it is still widely used by developers on those platforms. Objective-C is an extension of the C programming language, adding object-oriented programming features. It uses nearly identical syntax to C for all non-object-oriented operations, like assigning variables and declaring functions. However, its object-oriented features borrow syntax from another programming language called Smalltalk. This syntax has objects communicate with each other by sending messages. Instead of calling on a predetermined method directly, as in other languages, the object instance that receives a message determines which method to use dynamically at runtime. For example, a message from one object to another in Objective‑C looks like this: [myObject sayHello] This code snippet sends a message to the object named myObject, telling it to perform a method called sayHello. If the class that myObject belongs to includes a method named sayHello, it runs that method. Otherwise, the app checks the superclass that contains myObject's class, going further up the hierarchy until it finds the method. Other programming languages, like C++ or Java, bind methods to objects at compile time — making the program faster since it already knows what method to run, at the cost of making it less flexible. Objective‑C does not include a standard library in the way that C++ does. Instead, all Objective‑C apps use one of several software frameworks containing default functions and code. For example, apps developed for macOS use the Cocoa framework, included in the Xcode SDK. Updated March 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/objective-c Copy

OCR

OCR Stands for "Optical Character Recognition." OCR is a technology that recognizes text within a digital image. It is commonly used to recognize text in scanned documents, but it serves many other purposes as well. OCR software processes a digital image by locating and recognizing characters, such as letters, numbers, and symbols. Some OCR software will simply export the text, while other programs can convert the characters to editable text directly in the image. Advanced OCR software can export the size and formatting of the text as well as the layout of the text found on a page. OCR technology can be used to convert a hard copy of a document into an electronic version (or soft copy). For example, if you scan a multipage document into a digital image, such as a TIFF file, you can load the document into an OCR program, which will recognize the text and convert the document to an editable text file. Some OCR programs allow you to scan a document and convert it to a word processing document in a single step. While OCR technology was originally designed to recognize printed text, it can be used to recognize and verify handwritten text as well. For example, postal services such as USPS use OCR software to automatically process letters and packages based on the address. The algorithm checks the scanned information against the database of existing addresses to confirm the mailing address. The Google Translate app includes OCR technology that works with your device's camera. It allows you to capture the text from documents, magazines, signs, and other objects and translate it to another language in real-time. Updated April 16, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ocr Copy

Octal

Octal Octal, also known as "base-8," is a number system that uses eight digits (0 - 7) to represent any integer. Octal values are sometimes used to represent data in computer science since bytes contain eight bits. For example, the octal value "10" can represent 8 bits or 1 byte. "20" represents 2 bytes, 30 represents 3, etc. Octal values are also easily translatable from binary, which uses two digits, and hexadecimal, which uses 16 digits. To convert an octal value to a standard decimal or "denary" value, multiply each digit by 8^n, where n is the place of the digit, starting with 0, from right to left. Then add the results together. So 123 in octal can be converted to a decimal value as follows:
8^0 x 3 + 8^1 x 2 + 8^2 x 1 = 3 + 16 + 64 = 83
The table below shows several equal values in octal, decimal, hexadecimal, and binary:
Octal - Decimal - Hexadecimal - Binary
1 - 1 - 1 - 1
10 - 8 - 8 - 1000
50 - 40 - 28 - 101000
100 - 64 - 40 - 1000000
1234 - 668 - 29C - 1010011100
Binary to Octal Conversion To convert a binary value to an octal value, separate the digits into groups of three, starting from right to left. Then multiply each 1 or 0 by 2^n, where n is the place of each digit within its group, from right to left, starting with 0. For example, to convert the binary value 1010011100 from the table above, first separate the digits as: 1 010 011 100. Then multiply the values in each group, from the rightmost group to the leftmost, as follows:
100: 0 x 1 + 0 x 2 + 1 x 4 = 4
011: 1 x 1 + 1 x 2 + 0 x 4 = 3
010: 0 x 1 + 1 x 2 + 0 x 4 = 2
1: 1 x 1 + 0 x 2 + 0 x 4 = 1
The resulting value is: oct 1234. Octal values may also be displayed with a subscript "8," such as 1234₈. The bottom row of the table above can be written in subscript notation as: 1010011100₂ = 1234₈ = 668₁₀ = 29C₁₆ Updated March 12, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/octal Copy
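Most programming languages can perform these conversions directly. For example, in Python:

value = 0o123               # octal literal
print(value)                # 83 (decimal)
print(oct(668))             # '0o1234'
print(int("1234", 8))       # 668 - parse an octal string
print(oct(0b1010011100))    # '0o1234' - binary to octal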

ODBC

ODBC Stands for "Open Database Connectivity." ODBC is an API that helps an application interact with many types of databases in a standardized and consistent way. It allows an application to use the same software code to transfer data to and from databases using multiple database management systems, regardless of which DBMS software the database uses. ODBC was first introduced by Microsoft and the SQL Access Group in 1992 and has served as a standard for interoperability ever since. Applications interact with a database through an ODBC driver, which translates the application's standard database calls into each database's unique language. The driver is also responsible for translating data returned by the database into an ODBC-compliant format and sending it to the application through the ODBC API. For example, an application can use the same commands to query a MySQL database, a Microsoft Access database, and an Oracle database through each of their ODBC drivers. The application's developer can support a variety of databases without writing separate calls for each system. Each separate database implementation uses a separate driver. Software utilities called ODBC Driver Managers help users configure each driver's settings and manage which ones they have installed. In addition to first-party drivers, users can install third-party drivers that provide access to databases that lack official ODBC support. Users can also install ODBC drivers for other kinds of databases that don't rely on DBMS software — for example, Excel spreadsheets and CSV text files — to help their applications interact with data from other sources. NOTE: ODBC is language-independent and is available in multiple programming languages. A similar API, Java Database Connectivity (JDBC), standardizes calls for database-connected Java applications. Updated August 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/odbc Copy
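As a rough illustration, the snippet below uses the third-party pyodbc package for Python; the data source name "SalesDB", the credentials, and the table are hypothetical. The same calls work against any database whose ODBC driver is configured under that DSN.

import pyodbc   # third-party package: pip install pyodbc

conn = pyodbc.connect("DSN=SalesDB;UID=report_user;PWD=secret")
cursor = conn.cursor()

# The same standardized calls work regardless of which DBMS is behind the driver.
cursor.execute("SELECT name, total FROM orders WHERE total > ?", 100)
for row in cursor.fetchall():
    print(row.name, row.total)

conn.close()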

OEM

OEM Stands for "Original Equipment Manufacturer." An OEM is a company that manufactures or develops something that is sold by another company. In the computer world, it can refer to both hardware and software. Most computers contain components manufactured by multiple companies. For example, Dell does not make all the components in a Dell laptop, and Apple does not make all the components in an iMac. While these companies design the computers, they incorporate components from other manufacturers. For example, a Dell laptop might contain an AMD processor and a Samsung SSD. AMD would be the processor OEM, and Samsung would be the OEM of the storage device. An iMac might contain an Intel processor and Micron RAM. In this case, Intel and Micron are the respective OEMs. Knowing which OEMs provide components to your computer can be helpful when replacing or upgrading parts. It can help ensure you get as close of a match as possible to the original part. If you have had issues with a certain component, it might make sense to switch to another manufacturer. Just make sure the hardware specifications are supported by your system. You can look up OEM information in both Windows and macOS using the System Information app. In Windows, you can open this app by typing Msinfo32.exe in the "Run" field of the Start Menu. You can also locate the app in Programs > Accessories > System Tools > System Information. In macOS, you can select "About this Mac" from the Apple menu, then click System Report... to open the System Information app. You can locate the app in /Applications/Utilities/System Information.app. OEM Software OEM software refers to programs that are bundled with a computer system. This includes applications and utilities that are developed by various software developers and installed on a system before it is sold. It can also refer to an operating system, such as Windows, that is installed on a PC. Users who build their own PCs often purchase an "OEM license" of Windows to install on their custom system. NOTE: The term OEM is used in many other industries as well. For example, auto manufacturers often include multiple OEM parts in their cars and trucks. For instance, an Acura may have NGK spark plugs and Bose speakers, while an Audi might have Bosch spark plugs and Bang & Olufsen speakers. Updated April 28, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/oem Copy

Office

Office Microsoft Office is a productivity suite developed for Windows and Macintosh systems. It is available in several editions, each of which includes multiple applications. All Office editions include the three standard programs: Word, Excel, and PowerPoint. The first version of Microsoft Office was actually released for the Macintosh in 1989. The Windows version followed one year later in 1990. Over the past two decades, Microsoft has released new versions of Office roughly every two to three years. Starting with Office 95, Microsoft began offering multiple editions of Office, such as Standard, Professional, and Small Business editions. Newer versions of Office are available in "Home and Student" and Professional editions. Since the first version of Office was released, Microsoft has continued to develop Office for both the Windows and Macintosh platforms. However, beginning with Office 98 for Mac, Microsoft has developed distinctly different versions of Office for Mac and Windows. Therefore, while the programs perform the same functions, the user interface of Office programs on Mac and Windows systems may look different. Fortunately, most file formats saved by Office programs are cross-platform, meaning they can be opened on Macintosh or Windows computers. NOTE: Some programs that are included with professional editions of Office are not available for the Mac. These include Access, Publisher, InfoPath, Project, and Visio. Updated March 22, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/office Copy

Offline

Offline For a computer or other device, being offline means it is not connected to a network or the Internet. While a device is offline, it cannot send or receive data over the Internet, and applications that require an Internet connection may not function properly. A computer may go offline due to a technical problem with itself or other network devices. A computer's user may also choose to go offline intentionally to prevent it from sending and receiving data. Being offline may also more broadly refer to a device not being powered on, connected to other devices, and ready to be used. For example, when you try to print to a printer that is not plugged into power, you may get an error message saying that the printer is "offline." This does not mean that the printer needs an active Internet connection, but instead that it is simply not in a functional state. Some applications expect a constant connection to the Internet to check for new content or sync data to the cloud in the background. However, if you know that you won't be online for a while, some apps give you the option to "work offline" in an offline mode. Offline modes disable any functionality requiring an Internet connection until you choose to go back online. For example, setting an email client to work offline will stop it from automatically checking for new messages in the inbox; you can still write a message and send it to the outbox, where it will wait until you are back online. NOTE: Prior to the widespread adoption of always-on Internet connections (like DSL, cable, and mobile networks), you needed to dial-in to your ISP to actively connect to the Internet. This tied up your phone line, so you would end the call and go offline when you were finished using the Internet. Updated August 24, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/offline Copy

OLAP

OLAP Stands for "Online Analytical Processing." OLAP is a technology that helps users analyze data from various sources in a way that goes beyond two-dimensional tables. Unlike traditional databases that store data in rows and columns, OLAP organizes information into multiple dimensions. This multidimensional approach makes it possible to compare data in several different ways. For example, a business might want to compare sales figures for different months and locations to identify trends and patterns. An OLAP system uses one or more OLAP servers to structure and manage the data. These servers are equipped with functions that allow users to perform detailed analyses, such as calculating averages or identifying sales trends. Popular OLAP server software includes Oracle OLAP, Microsoft Analysis Services, and Hyperion Essbase. One of OLAP's main advantages is its ability to let users ask complex questions about their data. For example, you can zoom in on specific details or zoom out to see broader trends. This capability is helpful in various fields, such as financial reporting and market analysis, where it's important to understand general themes and specific events. Online Analytical Processing plays a key role in data mining, which involves finding hidden patterns and relationships in large datasets. By using OLAP, businesses can uncover insights that aren't immediately obvious, helping them to make more informed choices. It has also become a cornerstone of generative AI since it is an effective way to process the data used to train AI models. Updated July 30, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/olap Copy
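A very small-scale illustration of the sales-by-month-and-location example is possible with the third-party pandas library for Python; a real OLAP server performs this kind of aggregation on far larger, multidimensional datasets. The figures below are invented.

import pandas as pd

sales = pd.DataFrame({
    "month":    ["Jan", "Jan", "Feb", "Feb"],
    "location": ["NYC", "LA", "NYC", "LA"],
    "amount":   [1200, 900, 1500, 1100],
})

# Roll the rows up into a month-by-location view, similar to a simple OLAP cube.
cube = pd.pivot_table(sales, values="amount", index="month",
                      columns="location", aggfunc="sum")
print(cube)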

OLE

OLE Stands for "Object Linking and Embedding." OLE is a framework developed by Microsoft that allows you to place an object from one application into a document in another app. Objects placed in documents are not static and can easily be updated later. A document with an object linked or embedded within it is known as a compound document. A compound document may contain linked objects and embedded objects. Linking an object creates a connection to another file, automatically pulling in its contents. Editing that linked object in another application will automatically update its appearance when viewed in the compound document. For example, if you create a link to an Excel spreadsheet within a PowerPoint presentation and then edit the spreadsheet later, the slide automatically includes the updated spreadsheet the next time you present the slideshow. Embedding an object instead places it within the compound document entirely. You can edit an embedded object directly within the compound document's application, using the editing tools that the app makes available. If it doesn't have the right tools you need to edit the object, you can always edit the original object's file and embed it again. Editing an embedded Excel spreadsheet in a Word document. Microsoft first developed OLE technology alongside Windows 3.1 to link and embed objects in documents but later expanded the technology into the Component Object Model (COM). While OLE is exclusive to the Windows operating system, the COM framework is not tied to one specific operating system. It allows developers to create software modules that can interact with other applications and formed the basis for ActiveX technology, which allowed web developers to create interactive content for web pages. Updated November 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ole Copy

OLED

OLED Stands for "Organic Light Emitting Diode" and is pronounced "oh-led." OLED is a type of flat screen display similar to an LCD that does not require a backlight. Instead, each LED within an OLED panel lights up individually. An OLED screen has six layers that work together to produce color images. These layers include the following, from bottom to top: Substrate - the foundational structure that supports the panel; typically made out of glass or plastic Anode - a transparent layer that removes electrons when electrical current flows through it Conductive Layer - contains organic molecules or polymers such as polyaniline that transfer current to the emissive layer Emissive Layer - contains organic molecules or polymers such as polyfluorene that light up when current is passed through them Cathode - injects electrons into the other layers when current flows through it Cover - the top protective layer of the screen; typically made out of glass or plastic How does an OLED work? OLEDs emit light using a process called electrophosphorescence. While this may sound like an intimidating term, the process is relatively simple. Electrical current flows from the cathode (negatively charged) to the anode (positively charged), causing electrons to move to the emissive layer. These electrons find "holes" (where atoms are missing electrons) in the conductive layer and produce light when they fill these holes. The color of the light depends on the organic molecule that the current passed through in the emissive layer. Since the diodes in OLED displays light up individually, there is no need for a backlight. This means OLEDs can have darker blacks than LED/LCD displays and use less electricity. They are also thinner and may be curved or even bendable. While OLEDs have many advantages over LED/LCD displays, it has been expensive to produce large, reliable OLED screens. Therefore, OLEDs have been more common in small electronics, such as smartphones and tablets. As OLED production costs decrease and reliability increases, the technology will become more commonly used in larger screens, such as televisions and computer monitors. Updated September 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/oled Copy

Online

Online When a device is "online," it is powered on and connected to a network, other devices, or the internet. The term applies to various electronics, including computers, smartphones, smart TVs, gaming consoles, and IoT devices. Today, "online" most commonly describes a device that is connected to the internet. For two systems to communicate over the internet, both must be online. If you cannot access websites or check your email from your laptop, it may be offline. If you cannot access a specific website but others load correctly, the server that hosts the website may be offline. Online but not connected to the Internet Devices can appear online within a local network, but still not have internet access. For example, if you connect your laptop to a wireless router, the computer will indicate it is connected via Wi-Fi. However, if the router isn't hooked up to an active internet connection, you won't be able to access the internet. In this case, your device may be online in the context of the local network, but offline in terms of Internet access. Updated October 14, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/online Copy

ONT

ONT Stands for "Optical Network Terminal." An ONT is a peripheral device that provides a physical bridge between a fiber optic Internet connection and a local network. Much like a cable modem converts a digital signal into one that can travel over coax, an ONT converts the electrical signal of a local Ethernet network into an optical signal that travels over fiber optic lines. Once connected to a router, it provides a constant high-speed broadband Internet connection. An ONT has several ports on the back. It accepts an incoming fiber optic cable, which must be installed by a technician due to the fragile nature of the glass inside the cable. It also has at least one RJ45 Ethernet port to connect the ONT to a router. Some ONTs also include one or more RJ11 telephone jacks to support VoIP service through the ISP. The back of an ONT, connected to Ethernet, fiber, and power. Fiber Internet connections are one of the fastest types available, offering standard speeds up to 1 Gbps and up to 2 Gbps in certain areas. Unlike cable Internet, which offers fast download speeds but limited upload speeds, fiber Internet speeds are symmetrical; if you can download at 1 Gbps, you can upload at 1 Gbps. It is also uncommon for fiber ISPs to impose monthly data caps on their customers. NOTE: Unlike modems, which can be purchased separately and installed by the customer on their own, an ONT is considered property of the ISP and needs to be installed by a technician. Updated September 29, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ont Copy

OOP

OOP Stands for "Object-Oriented Programming." OOP (not Oops!) refers to a programming methodology based on objects, instead of just functions and procedures. These objects are organized into classes, which allow individual objects to be grouped together. Most modern programming languages, including Java, C++, and PHP, are object-oriented languages, and many older programming languages now have object-oriented versions. An "object" in an OOP language refers to a specific type, or "instance," of a class. Each object has a structure similar to other objects in the class, but can be assigned individual characteristics. An object can also call functions, or methods, specific to that object. For example, the source code of a video game may include a class that defines the structure of characters in the game. Individual characters may be defined as objects, which allows them to have different appearances, skills, and abilities. They may also perform different tasks in the game, which are run using each object's specific methods. Object-oriented programming makes it easier for programmers to structure and organize software programs. Because individual objects can be modified without affecting other aspects of the program, it is also easier to update and change programs written in object-oriented languages. As software programs have grown larger over the years, OOP has made developing these large programs more manageable. Updated November 2, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/oop Copy
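A short Python sketch of the video game example (the names and numbers are illustrative): the class defines the shared structure, and each object is an instance with its own characteristics and methods.

class Character:
    def __init__(self, name, hit_points, speed):
        self.name = name
        self.hit_points = hit_points
        self.speed = speed

    def attack(self, target):
        # A method that objects of this class can call on each other.
        target.hit_points -= 10
        print(f"{self.name} hits {target.name} ({target.hit_points} HP left)")

hero = Character("Hero", 100, 5)
goblin = Character("Goblin", 30, 3)
hero.attack(goblin)   # Hero hits Goblin (20 HP left)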

Opacity

Opacity Opacity (pronounced "o-pass-ity," not "o-pace-ity") describes how opaque an object is. While it is not specific to computer terminology, the term is often used in computer graphics software. For example, many programs include an "Opacity" setting that allows you to adjust the transparency of an image. To understand opacity, it is important to understand what "opaque" means. An opaque object is completely impervious to light, which means you cannot see through it. For example, a car door is completely opaque. The window above the door, however, is not opaque, since you can see through it. If the window is tinted, it is partially opaque and partially transparent. The less transparent the window is, the higher its opacity. In other words, transparency and opacity are inversely related. Most digital images are 100% opaque. For example, if you open an image captured with a digital camera, it will be completely opaque. However, you can use an image editing application to adjust the opacity of the image. This is especially useful when editing an image with multiple layers. For instance, Adobe Photoshop allows you to adjust the opacity for each layer from 0 to 100. 0 is completely transparent (or invisible), while 100 is completely opaque. By sliding the opacity of each layer somewhere in between 0 and 100, you can overlay multiple layers on top of each other, creating a multilayer image mosaic. NOTE: When saving a semi-transparent image, it is important to save the file in a format that supports multiple levels of opacity. JPEG does not support transparency at all and GIF only supports fully transparent or fully opaque pixels. The PNG format is the best choice since it includes an alpha channel, which stores an opacity setting for each pixel. Updated March 16, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/opacity Copy
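Behind the scenes, overlaying a semi-transparent layer is usually a weighted average of the two colors. Here is a simplified Python sketch of that blend, with opacity expressed from 0 to 100 as in the Photoshop example (real editors offer many additional blend modes):

def blend(top, bottom, opacity):
    a = opacity / 100.0                     # convert to a 0.0 - 1.0 factor
    return tuple(round(a * t + (1 - a) * b) for t, b in zip(top, bottom))

red = (255, 0, 0)
blue = (0, 0, 255)
print(blend(red, blue, 50))   # (128, 0, 128) - a 50%-opaque red layer over blue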

Open Firmware

Open Firmware Open Firmware is a type of firmware that some computer systems use when they boot up. It controls the processor and performs system diagnostics before the operating system is loaded. Open Firmware also builds the "device tree," which locates internal and external devices connected to the computer. Each device is then assigned a unique address so it can be used once the computer starts up. Several types of computers use Open Firmware, including PowerPC-based Macintosh systems, Sun Microsystems SPARC-based workstations, and IBM POWER systems. (Most Windows-based PCs use the BIOS for the same purpose.) Because Open Firmware is an "open" standard, devices that support Open Firmware can typically be used in multiple Open Firmware-based systems. For example, identical PCI cards could be used in both Sun and Macintosh-based computer systems. To access the Open Firmware interface on a PowerPC-based Macintosh, press and hold "Command-Option-O-F" during startup. On Sun systems, the Open Firmware interface is displayed at startup and can be accessed afterwards by pressing "L1-A" (or Stop-A) while the computer is running. Updated November 3, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/openfirmware Copy

Open Format

Open Format An open format is a file format with an openly-published specification that anyone can use. It is the opposite of a proprietary file format, which is only used by a specific software company or application. Open formats make it possible for multiple programs to open the same file. For example, you can create a plain text (.TXT) file in a text editor like Notepad and then open it with another application like Microsoft Word. Since plain text files store data in an open format, they can be opened by any text editor or word processor. Conversely, a file saved in a proprietary format may only be opened by the corresponding application. For example, only Apple Pages can open a .PAGES document. Only Adobe Premiere can open a .PRPROJ file. In recent years, many software companies have moved from proprietary formats to open ones. Microsoft switched from proprietary Office file formats (.DOC, .XLS, .PPT) to open formats (.DOCX, .XLSX, .PPTX) in 2007. The new "Open XML" Office formats use standard Zip compression and contain data stored in plain XML. Any application that supports the publicly-available Open XML specification can open them. In 2008, Adobe made PDF an open standard. The International Organization for Standardization (ISO) now maintains the .PDF format specification. Partially Open Formats Some file formats are "partially open," meaning the specification is made partially available for public use. Other applications can open these files, but they may have limited editing and saving capabilities. Examples of partially open formats include Photoshop documents (.PSD), CorelDRAW drawings (.CDR), and AutoCAD drawings (.DWG). Updated November 21, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/open_format Copy
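Because an Open XML document is an ordinary Zip archive full of XML files, any Zip-aware tool can look inside it. A short sketch in Python (the filename is hypothetical):

import zipfile

with zipfile.ZipFile("report.docx") as archive:   # a .docx file is a Zip archive of XML parts
    for name in archive.namelist():
        print(name)                               # e.g., word/document.xml, docProps/core.xml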

Open Source

Open Source Open-source software is software whose source code is freely available to the public. Anyone can download and run an open-source software project, view its source code to see how it works, or make changes to improve it. Many open-source software projects are a community effort maintained by a group of volunteer software developers instead of a single developer or organization. A project's developers provide a program and its source code to the public using one of several open-source software licenses, which lay out how other people may use that source code. There are two general types of open-source licenses — permissive (or academic) licenses and copyleft (or reciprocal) licenses. Permissive licenses allow you to use, modify, and distribute open-source code in another project as long as you provide a disclaimer that it includes open-source code and give credit to the authors of that code. Examples of this type of license include the BSD, MIT, and Apache licenses. Copyleft licenses allow you to use, modify, and distribute open-source code in a new project as long as you distribute that project using the same license terms. The GNU General Public License (GPL) is the most common type of this license. "Copyleft" is a play on the word "copyright" that indicates a reversal of traditional copyrights. It uses the framework of copyright law to ensure that the work of the open-source software community remains open-source by preventing its use in proprietary, closed-source software. Unlike commercial software, whose development funding comes from software sales, open-source software projects are typically funded through donations of money (from interested donor organizations and community crowdfunding) and/or labor (by volunteer programmers). Documentation and user support also come from the project's community instead of paid support staff — if you need help with an open-source program, you can often find it on a project's forum, discussion group, or wiki page. NOTE: Some notable open-source software projects include the Linux operating system, the Firefox web browser, and the LibreOffice office software suite. Updated March 29, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/open_source Copy

OpenCL

OpenCL Stands for "Open Computing Language." OpenCL is an open standard for cross-platform, parallel programming. It was originally developed by Apple in 2008 and is now maintained by the Khronos Group. The first major operating system to support OpenCL was Snow Leopard (Mac OS X 10.6), which was released in 2009. OpenCL provides an API that allows software programs to access multiple processors simultaneously to perform parallel processing. Examples include CPUs, GPUs, digital signal processors (DSPs), and field-programmable gate arrays (FPGAs). By distributing the computing load across multiple processors, OpenCL increases processing efficiency and can substantially improve a program's performance. While OpenCL supports many different types of processors, it is most notably used to access the GPU for general computing tasks. This technique, also called GPGPU, takes advantage of the GPU's processing power and allows it to assist the CPU in completing calculations. Before OpenCL, the graphics processor would often remain idle while the CPU was running at full capacity. OpenCL enables the GPU to assist the CPU in processing non-graphics-related computations. In order to take advantage of OpenCL, both the hardware and software must support the OpenCL API. Because of the performance advantage OpenCL provides, most video cards developed by NVIDIA and AMD now support OpenCL. Many mobile graphics processors, such as those used in smartphones and tablets, support OpenCL as well. Updated June 26, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/opencl Copy

OpenGL

OpenGL OpenGL, short for "Open Graphics Library," is an application programming interface (API) designed for rendering 2D and 3D graphics. It provides a common set of commands that can be used to manage graphics in different applications and on multiple platforms. By using OpenGL, a developer can use the same code to render graphics on a Mac, PC, or mobile device. Nearly all modern operating systems and hardware devices support OpenGL, making it an easy choice for graphics development. Additionally, many video cards and integrated GPUs are optimized for OpenGL, allowing them to process OpenGL commands more efficiently than other graphics libraries. Examples of OpenGL commands include drawing polygons, assigning colors to shapes, applying textures to polygons (texture mapping), zooming in and out, transforming polygons, and rotating objects. OpenGL is also used for managing lighting effects, such as light sources, shading, and shadows. It can also create effects like haze or fog, which can be applied to a single object or an entire scene. OpenGL is commonly associated with video games because of its widespread use in 3D gaming. It provides developers with an easy way to create cross-platform games or port a game from one platform to another. OpenGL is also used as the graphics library for many CAD applications, such as AutoCAD and Blender. Even Apple uses OpenGL as the foundation of the macOS Core Animation, Core Image, and Quartz Extreme graphics libraries. NOTE: OpenGL was originally developed and released by Silicon Graphics (SGI) in 1992. The initial version was approved by an architectural review board, which included Microsoft, IBM, DEC, and Intel. In 2006, SGI passed the development and maintenance of OpenGL to The Khronos Group. Updated May 8, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/opengl Copy
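As an illustrative sketch only (using the third-party PyOpenGL bindings and older immediate-mode commands, and assuming a rendering context has already been created by a windowing toolkit), drawing a colored polygon might look like this:

from OpenGL.GL import (GL_COLOR_BUFFER_BIT, GL_TRIANGLES, glBegin,
                       glClear, glColor3f, glEnd, glVertex2f)

def draw_triangle():
    glClear(GL_COLOR_BUFFER_BIT)   # clear the frame
    glBegin(GL_TRIANGLES)          # start describing a polygon
    glColor3f(1.0, 0.0, 0.0)       # red vertex
    glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0)       # green vertex
    glVertex2f(0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0)       # blue vertex
    glVertex2f(0.0, 0.5)
    glEnd()                        # finish the polygon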

Operand

Operand In mathematics, an operand is an object to which an operator (+, -, ×, ÷) is applied. For example, in the equation 7 + 5 = 12, the numbers 7 and 5 are operands. The plus symbol (+) is the operator and the value after the equal sign (12) is the result. In computer science, an operand is a value within a function. It may be a specific number or a variable. Operands can be defined within a function or passed into a function via one or more parameters. The function below, written in PHP, includes three operands: two variables and one constant. function sumPlusTen($x, $y) { define('Z', 10); $sum = $x + $y + Z; return $sum; } In the example above, $x, $y, and Z are operands in the equation used to calculate $sum. The variables $x and $y are passed to the function sumPlusTen() as parameters, while the constant Z is defined within the function as 10. If sumPlusTen(5, 6) is called within a program, it would calculate $sum using the operands 5, 6, and 10 and return a value of 21. Updated February 16, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/operand Copy

Operating System

Operating System An operating system (OS) is software that controls a computer's core functions. It communicates with the hardware and allows programs to run. The OS contains system software necessary for your computer to boot up and function and may include bundled applications like a web browser and text editor. Every computer, tablet, and smartphone has an operating system. The most popular operating systems for desktop and laptop computers are Windows, macOS, and Linux. These operating systems provide a graphical user interface, including a desktop environment that allows you to manage files and folders. You can also install and run applications from third-party developers. The computer's hardware determines what operating systems you can install. Apple computers only run macOS, while other PC hardware can run Windows, Linux, and other operating systems. [Image: Several windows open in the Windows 10 operating system] Mobile devices like tablets and smartphones use dedicated mobile operating systems designed for use with a touchscreen. The two most popular mobile operating systems are Android and iOS. iOS is exclusive to iPhone, while a variation, iPadOS, runs on iPad. Android runs on smartphones and tablets from multiple manufacturers. [Image: The Home Screen of an iPhone running iOS 16] Each operating system supports a different application programming interface (API), which developers can access when building applications. For example, the API provides prewritten functions for creating, saving, and deleting files. Before an app can run, the source code must be compiled for a specific operating system. While some applications are cross-platform and available for more than one operating system, many software developers support only a single OS to reduce the complexity of their project. There are many reasons someone may choose one operating system over another. Personal preference for a particular GUI may play a role. Some operating systems are well-suited for certain tasks. For example, Linux is ideal for running a server. Windows has the widest variety of software and games. macOS is known to be user-friendly and integrates well with iOS devices. Android is more customizable and has more hardware options than iOS. When choosing an operating system, make sure it runs the software you need and has a user interface you are comfortable using. Updated May 4, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/operating_system Copy
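As a small illustration of how a program relies on the operating system's API (Python here; the filename is hypothetical), a language's standard library forwards calls like these to whatever OS the program runs on:

import os

with open("notes.txt", "w") as f:   # asks the OS to create and open a file
    f.write("saved through the operating system's file API")
os.remove("notes.txt")              # asks the OS to delete the file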

Operator

Operator An operator is a symbol within a mathematical expression that defines a specific operation. For example, the plus symbol (+) is an operator that represents addition. Common operators include: + (add) - (subtract) * or x (multiply) / or ÷ (divide) % or mod (modulus) Consider the calculation: 2 + 3 x 4 = 14 In the expression above, + and x are operators, while the numbers 2, 3, and 4 are operands. 14 is the result (since the x operator takes precedence over the + operator). While operators are typically associated with mathematics, they also apply to computer science. For example, the function below, written in PHP, contains two operators and three operands. function halfPlusSeven($x) { $total = $x / 2 + 7; return $total; } The / and + symbols are operators, while the variable $x and the numbers 2 and 7 are operands. If $x = 30, the expression will evaluate as (30 / 2) + 7, which will return a result of 22. Updated July 20, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/operator Copy

Optical Carrier

Optical Carrier High-speed fiber optic connections are measured in Optical Carrier or "OC" transmission rates. These rates include several standardized bandwidth amounts supported by Synchronous Optical Networking (SONET) connections. They are generically referred to as OCx, where the "x" represents a multiplier of the basic OC-1 transmission rate, which is 51.84 Mbps. The following is a list of standardized Optical Carrier (OC) data transmission rates. The "STM" numbers in parentheses are the OC equivalents defined in the Synchronous Digital Hierarchy (SDH). The STM numbers are commonly used to define the bandwidth supported by high-speed networking hardware. OC-1 (STM-0) - 51.84 Mbps OC-3 (STM-1) - 155.52 Mbps OC-9 (STM-3) - 466.56 Mbps OC-12 (STM-4) - 622.08 Mbps OC-18 (STM-6) - 933.12 Mbps OC-24 (STM-8) - 1244.16 Mbps OC-36 (STM-12) - 1866.24 Mbps OC-48 (STM-16) - 2488.32 Mbps OC-192 (STM-64) - 9953.28 Mbps OC-768 (STM-256) - 40 Gbps OC-3072 (STM-1024) - 160 Gbps As you can see from the list above, the number following "OC-" serves as a multiplier of the basic OC-1 rate of 51.84 Mbps. For example, OC-3 is 51.84 Mbps x 3, or 155.52 Mbps. OC-12 is 4 times that of the OC-3 rate (155.52 Mbps x 4), which is 622.08 Mbps. OC rates are used to measure speeds of high-speed optical networks, from local business-to-business connections, to the highest bandwidth connections used for the Internet backbone. Small and medium sized businesses that require high-speed Internet connectivity may use OC-3 or OC-12 connections. ISPs that require much larger amounts of bandwidth may use one or more OC-48 connections. Generally, OC-192 and greater connections are used for the Internet backbone, which connects the largest networks in the world together. Updated December 8, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/optical_carrier Copy
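Because every OC level is just a multiple of the 51.84 Mbps base rate, any OCx bandwidth can be computed directly. A quick sketch in Python:

OC1_MBPS = 51.84                     # base OC-1 transmission rate

def oc_rate_mbps(multiplier):
    return round(multiplier * OC1_MBPS, 2)

print(oc_rate_mbps(3))     # 155.52 (OC-3)
print(oc_rate_mbps(12))    # 622.08 (OC-12)
print(oc_rate_mbps(192))   # 9953.28 (OC-192)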

Optical Drive

Optical Drive An optical drive is a computer disc drive that uses a laser to read optical discs like CDs, DVDs, and Blu-rays. Optical drives are available both as internal models, mounted inside a computer and connected directly to the motherboard, and as external models in separate enclosures. Most optical drives can not only read optical discs, but can also write to writable and rewritable discs. The word "optical" means "relating to vision or light." In the case of optical drives and optical media, it refers to the laser that reads the underside of a spinning plastic disc, detecting a series of bumps and pits that contain encoded digital data. Optical drives that can write to discs include several lasers at different levels of power — a low-power laser that reads data, a medium-power laser that erases rewritable discs, and a high-power laser that burns data to a disc. [Image: A Lite-On optical DVD±RW drive] Each new generation of optical media uses narrower lasers to read smaller bumps and pits — a CD contains pits 800 nm long, while a Blu-ray disc's pits are only 150 nm long. The speed at which an optical drive can read the pits on a disc depends on how fast the disc is spinning — a Blu-ray optical drive, for example, can spin a disc at more than 9,000 revolutions per minute while reading data. Even when a disc is spinning that fast, optical drives are still slower than hard drives and solid-state drives. However, since optical media is inexpensive and removable, it is most often used to distribute large amounts of data, particularly high-definition movies and console video games. Updated January 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/optical_drive Copy

Optical Media

Optical Media Optical media is a type of data storage that uses lasers to read data from a removable plastic disc. Most optical discs are written to once by stamping encoded data into the underside of a disc's metal coating. Recordable discs can be written to by a computer's optical drive, and special rewritable discs can be erased and rewritten multiple times. CDs, DVDs, and Blu-rays are the most common optical media formats, and are often used to distribute music, videos, games, and other software. Instead of storing data using magnetic charges, optical media encodes digital data using a series of bumps and pits on the underside of a disc's metallic coating. As the disc spins in the drive, a laser scans those bumps and pits and decodes the patterns into readable data. Each new optical media format shrinks the size of these bumps and pits, uses a narrower laser to read them, and spins the disc faster to provide enough bandwidth to stream the disc's contents. For example, a CD has pits that are 800 nm long and spins at around 250 revolutions per minute (RPM), while a Blu-ray disc has pits only 150 nm long and spins at more than 1,800 RPM. [Image: Verbatim Recordable DVD] Like other types of data storage, optical media has some advantages and some drawbacks. The largest benefit is that optical media discs are inexpensive to manufacture, especially when pressing thousands of copies of the same disc. When properly stored in dry, stable conditions, optical discs have a lifespan of several decades. However, optical disc data transfer rates are slow compared to hard disk drives and SSDs. They can also be damaged by scratches to the top and bottom surface of the disc when not stored in a protective case. Updated September 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/optical_media Copy

Origin Server

Origin Server An origin server is the original source of content distributed by a content delivery network (CDN). Examples include webpages, images, CSS files, and streaming media. A CDN retrieves data from the origin server and replicates it on edge servers around the world. An origin server may be a dedicated or virtual web server located at a web host. It is the physical server webmasters access when adding or updating website content and acts as the "source of truth" for the connected CDN. When a user accesses a resource not already cached on a specific edge node, the CDN will retrieve the page from the origin, then cache it on the corresponding edge server. If a website does not use a CDN, there is no separate origin server, since a single web server handles all incoming requests. Below are common origin server operations: Push - the origin server "pushes" content to the CDN, replicating the content across multiple edge nodes Pull - the CDN "pulls" content from the origin server when it updates a resource or caches it for the first time Purge - the CDN removes an object from all edge servers, which is useful when updating a file on the origin server NOTE: Most CDNs can be configured with custom caching directives, such as max-age or expires, which indicate how frequently the CDN should check the origin server for an updated file. Updated April 27, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/origin_server Copy
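As an illustrative sketch (Python standard library; the URL is a placeholder), a client or edge node can inspect the caching directives an origin server attaches to a response:

from urllib.request import urlopen

with urlopen("https://example.com/") as response:
    print(response.headers.get("Cache-Control"))   # e.g., "max-age=86400"
    print(response.headers.get("Expires"))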

OS X

OS X OS X is Apple's operating system that runs on Macintosh computers. It was first released in 2001 and over the next few years replaced Mac OS 9 (also known as Mac OS Classic) as the standard OS for Macs. It was called "Mac OS X" until version OS X 10.8, when Apple dropped "Mac" from the name. OS X was originally built from NeXTSTEP, an operating system designed by NeXT, which Apple acquired when Steve Jobs returned to Apple in 1997. Like NeXTSTEP, OS X is based on Unix and uses the same Mach kernel. This kernel provides OS X with better multithreading capabilities and improved memory management compared to Mac OS Classic. While the change forced Mac developers to rewrite their software programs, it provided necessary performance improvements and scalability for future generations of Macs. The OS X desktop interface is called the Finder and includes several standard features. OS X does not have a task bar like Windows, but instead includes a menu bar, which is fixed at the top of the screen. The menu bar options change depending on what application is currently running and is only hidden when full screen mode is enabled. The Finder also includes a Dock, which is displayed by default on the bottom of the screen. The Dock provides easy one-click access to frequently used applications and files. The Finder also displays a user-selectable desktop background that serves as a backdrop for icons and open windows. When you start up a Mac, OS X loads automatically. It serves as the fundamental user interface, but also works behind the scenes, managing processes and applications. For example, when you double-click an application icon, OS X launches the corresponding program and provides memory to the application while it is running. It reallocates memory as necessary and frees up used memory when an application is quit. OS X also includes an extensive API, or library of functions, that developers can use when writing Mac programs. While the OS X interface remains similar to the original version released in 2001, it has gone through several updates, which have each added numerous new features to the operating system. Below is a list of the different versions of OS X, along with their code names. Mac OS X 10.0 (Cheetah) Mac OS X 10.1 (Puma) Mac OS X 10.2 (Jaguar) Mac OS X 10.3 (Panther) Mac OS X 10.4 (Tiger) Mac OS X 10.5 (Leopard) Mac OS X 10.6 (Snow Leopard) Mac OS X 10.7 (Lion) OS X 10.8 (Mountain Lion) OS X 10.9 (Mavericks) OS X 10.10 (Yosemite) Updated December 31, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/os_x Copy

OSD

OSD Stands for "On Screen Display." Most monitors include an on screen menu for making adjustments to the display. This menu, called the OSD, may be activated by pressing the Menu button located on the side or front of your monitor. Once the OSD appears on the screen, you can navigate through the menu and make adjustments using the Plus (+) and Minus (-) buttons, which are usually located right next to the menu button. On screen displays vary between monitors, but most include the basic brightness and contrast controls. Some include more advanced color controls, allowing you to calibrate individual red, green, and blue (RGB) settings. Many monitors also support positioning adjustments, which can be used to make slight modifications to the position and tilt of the screen. Monitors that include built-in speakers may include audio adjustments as well. Most CRT and flat screen monitors, such as LCD and LED displays, include OSDs. However, flat screen displays typically have fewer adjustment options since their screen position is more consistent than older CRT monitors. Some newer monitors also allow users to make adjustments through a software interface rather than using the standard on screen display. Regardless of what monitor you have, it is good to be familiar with the OSD so you know how to adjust the display settings. Updated January 21, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/osd Copy

OSI Model

OSI Model The OSI (Open Systems Interconnection) model was created by the ISO to help standardize communication between computer systems. It divides communications into seven different layers, which each include multiple hardware standards, protocols, or other types of services. The seven layers of the OSI model include: The Physical layer The Data Link layer The Network layer The Transport layer The Session layer The Presentation layer The Application layer When one computer system communicates with another, whether it is over a local network or the Internet, data travels through these seven layers. On the sending system, data moves down the stack from the application layer to the physical layer; it then crosses the network and moves back up through the layers on the receiving system until it reaches the application layer, where it is processed. The best way to explain how the OSI model works is to use a real-life example. In the following illustration, a computer is using a wireless connection to access a secure website. The communications stack begins with the (1) physical layer. This may be the computer's Wi-Fi card, which transmits data using the IEEE 802.11n standard. Next, the (2) data link layer might involve connecting to a router via DHCP. This would provide the system with an IP address, which is part of the (3) network layer. Once the computer has an IP address, it can connect to the Internet via the TCP protocol, which is the (4) transport layer. The system may then establish a NetBIOS session, which creates the (5) session layer. If a secure connection is established, the (6) presentation layer may involve an SSL connection. Finally, the (7) application layer consists of the HTTP connection to the website. The OSI model provides a helpful overview of the way computer systems communicate with each other. Software developers often use this model when writing software that requires networking or Internet support. Instead of recreating the communications stack from scratch, software developers only need to include functions for the specific OSI layer(s) their programs use. Updated March 22, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/osi_model Copy
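The upper layers of that example can be sketched in code. This Python snippet (standard library only; the host name is a placeholder) opens a TCP connection at the transport layer, wraps it in TLS at the presentation layer, and sends an HTTP request at the application layer:

import socket, ssl

host = "example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as tcp_sock:                       # transport layer (TCP)
    with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:     # presentation layer (TLS/SSL)
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")       # application layer (HTTP)
        print(tls_sock.recv(200).decode(errors="replace"))                     # e.g., "HTTP/1.1 200 OK ..."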

OSPF

OSPF Stands for "Open Shortest Path First." OSPF is a method of finding the shortest path from one router to another in a local area network (LAN). As long as a network is IP-based, the OSPF algorithm will calculate the most efficient way for data to be transmitted. If there are several routers on a network, OSPF builds a table (or topology) of the router connections. When data is sent from one location to another, the OSPF algorithm compares the available options and chooses the most efficient way for the data to be sent. This limits unnecessary delays in data transmission and prevents infinite loops. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ospf Copy
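The "shortest path first" calculation OSPF performs is Dijkstra's algorithm run against the table of router connections. A simplified sketch in Python (the routers and link costs are made up for illustration):

import heapq

def shortest_paths(links, source):
    # Dijkstra's shortest-path-first algorithm over a map of link costs
    costs = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, router = heapq.heappop(queue)
        if cost > costs.get(router, float("inf")):
            continue
        for neighbor, link_cost in links[router].items():
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return costs

network = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(shortest_paths(network, "R1"))   # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}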

OTA

OTA Stands for "Over-The-Air." OTA refers to any type of wireless transmission, but it is most commonly used to describe either 1) software updates distributed to mobile devices or 2) TV and radio broadcasts transmitted over the air. 1) Mobile Device Programming OTA signals that are used to program, update, configure, and provision hardware devices are collectively known as OTAP (Over-The-Air Programming). One example is over-the-air activation (OTAA), which allows you to wirelessly activate a new cell phone by simply typing a code or following specific steps. This automated service provided by most mobile providers can save you a trip to a mobile phone service location when you buy a new phone. OTA transmissions can also send updates such as new carrier settings and even OS updates to supported smartphones. Another example is over-the-air service provisioning (OTASP), which is used to remotely activate and configure multiple devices, such as radio handsets, for a specific group of users. OTASP transmissions are typically sent out from a central server and may be encrypted to protect personal information. Both OTAA and OTASP are subsets of OTAP. 2) TV and Radio Signals OTA can also refer to television and radio transmissions that are sent and received over the air. Most often, OTA is used to contrast television channels that are broadcast over the air with those offered by cable service providers. OTA channels are local channels that can be received using a traditional "bunny ears" antenna. These types of channels are free and do not require a monthly subscription. Most cable providers include digital versions of OTA channels that are specific to each customer's region. NOTE: Even though satellite TV broadcasts are transmitted wirelessly, the channels offered by satellite providers are generally not considered "OTA." Instead, they are simply called "cable channels," since they are the same channels offered by cable companies. Updated October 20, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ota Copy

OTT

OTT Stands for "Over-the-top." OTT refers to data sent over the Internet that bypasses traditional mediums. Two common examples include OTT content and OTT messaging. Over-the-top Content Over-the-top content refers to movies and television shows that are delivered directly to users. Instead of requiring a cable or satellite television subscription, OTT content can be downloaded and viewed on demand. Popular OTT services include Netflix, Hulu, Amazon Prime, and HBO Now. Free services like YouTube and Vimeo are also considered OTT, though they compete less directly with television providers. OTT services do not offer a list of channels. Instead, shows and movies must be selected individually and watched on demand. While traditional providers like Comcast and DIRECTV offer on demand content as well as live shows, their monthly fees are typically several times those of OTT providers. Because OTT service is relatively cheap ($8 - $15 per month), many users purchase multiple OTT subscriptions as an alternative to cable or satellite TV. Over-the-top Messaging For many years, text messaging (or SMS messaging) was the standard way to message someone using a mobile phone. In 2009, WhatsApp introduced a free way to message other users without using SMS. Apple introduced iMessage in 2011, which uses Apple's own free messaging service rather than SMS or MMS. As these alternative messaging options grew in popularity, many cellular service providers increased their monthly messaging limits for users and many plans now offer unlimited messaging. NOTE: Both OTT content and messaging still require Internet access, such as cable, DSL, or a cellular data connection. Therefore, many Netflix subscribers still use Comcast for their Internet connection. Updated March 18, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ott Copy

Outbox

Outbox An outbox is where outgoing e-mail messages are temporarily stored. While you are composing a message, most mail programs automatically save a draft of your message in the outbox. The message is then stored in the outbox until it is successfully sent to the recipient. Once the message has been sent, most e-mail programs move the message to the "Sent" or "Sent Messages" folder. While the terms "Outbox" and "Sent Messages" are often used synonymously, technically they have different meanings. Unlike the inbox, which is often overflowing with e-mail, the outbox often does not contain any messages. This is because all the messages that have been sent have already been transferred to the Sent Messages folder. You can think of an e-mail outbox much like the outbox at an office. Mail that is to be delivered is temporarily placed in the outbox until the mailman (or the designated office mail guy) picks up the mail and brings it to the post office. However, the messages in an e-mail outbox are typically delivered immediately (unless a connection to the outgoing SMTP mail server is not available). If only it was as easy to keep your inbox clean... Updated July 12, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/outbox Copy

Output

Output Data generated by a computer is referred to as output. This includes data produced at a software level, such as the result of a calculation, or at a physical level, such as a printed document. A basic example of software output is a calculator program that produces the result of a mathematical operation. A more complex example is the results produced by a search engine, which compares keywords to millions of pages in its Web page index. Devices that produce physical output from the computer are creatively called output devices. The most commonly used output device is the computer's monitor, which displays data on a screen. Devices such as the printer and computer speakers are some other common output devices. The opposite of output is input, which is data that is entered into the computer. Input and output devices are collectively referred to as I/O devices. Updated December 12, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/output Copy

Output Device

Output Device Any device that outputs information from a computer is called, not surprisingly, an output device. Since most information from a computer is output in either a visual or auditory format, the most common output devices are the monitor and speakers. These two devices provide instant feedback to the user's input, such as displaying characters as they are typed or playing a song selected from a playlist. While monitors and speakers are the most common output devices, there are many others. Some examples include headphones, printers, projectors, lighting control systems, audio recording devices, and robotic machines. A computer without an output device connected to it is pretty useless, since the output is what we interact with. Anyone who has ever had a monitor or printer stop working knows just how true this is. Of course, it is also important to be able to send information to the computer, which requires an input device. Updated May 13, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/outputdevice Copy

Overclocking

Overclocking Overclocking is the process of increasing the clock speed of a processor beyond its stock setting. It can increase a processor's performance, but at the cost of increased power usage, heat generation, and the possibility of system instability or physical damage. Power users who enjoy tinkering with their computers often overclock their CPUs (and sometimes their GPUs and RAM) to get the most speed out of their hardware. In order to overclock a processor, it must have an unlocked frequency multiplier (also called a clock ratio). Intel processors whose product numbers end in "K" are unlocked, as are most AMD processors. It also requires a motherboard whose chipset allows you to modify frequency multipliers and voltage settings; you can change these settings in the BIOS / UEFI, or within third-party software available for Windows. [Image: Overclocking options on an Aorus motherboard] Overclocking involves managing three variables — the clock speed frequency multiplier, voltage, and heat. The first step is to adjust the processor's frequency multiplier, which multiplies the motherboard's base clock (typically set at 100 MHz) to get the processor's effective clock speed. For example, a 3.5 GHz processor starts with a frequency multiplier of 35, and by adjusting that multiplier to 36, then 37, then 38, you can incrementally increase its clock speed. At some point, this will cause system instability or boot failure; you can then roll back the frequency multiplier to the highest stable setting. You can reach higher clock speeds by increasing the voltage supplied to the processor. However, increasing the voltage also increases the heat the processor produces, which needs to dissipate before it overheats the computer. It often requires more robust cooling than the stock heat sink and fan packaged with a processor. Most overclockers upgrade to a third-party HSF or install a liquid cooling system to manage a processor's heat. Increasing the voltage too far will cause permanent damage to the processor (which is not covered by manufacturer warranties), so exercise caution when overclocking. NOTE: Modern processors often include features that automatically increase their clock speed when additional power is needed, then lower it back to the default after the need has passed. This feature, called Turbo Boost on Intel processors and Precision Boost on AMD processors, provides some of the benefits of overclocking automatically without the extra configuration and risk. Updated October 25, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/overclocking Copy
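The relationship between the base clock and the frequency multiplier is simple arithmetic, sketched here in Python:

BASE_CLOCK_MHZ = 100   # typical motherboard base clock

def effective_clock_ghz(multiplier):
    return BASE_CLOCK_MHZ * multiplier / 1000

print(effective_clock_ghz(35))   # 3.5 - stock clock speed in GHz
print(effective_clock_ghz(38))   # 3.8 - a modest overclock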

Overwrite

Overwrite In computing, overwriting refers to replacing old data with new data. There are two primary types of overwriting: 1) replacing text, and 2) replacing files. 1) Replacing text The default behavior of most word processing programs is to insert characters where the cursor is located. However, some programs allow you to change the standard behavior from insert to overwrite (or "overtype"). If an application supports both modes, the Insert (INS) key can often be used to toggle between insert and overwrite mode. While in insert mode, text to the right of the cursor is shifted to the right as new text is entered. For example, say you want to add the word "and" between "five" and "six" in the string "four, five, six." You would move the cursor immediately before the word "six," then type "and" (followed by a space). The result would be "four, five, and six." In overwrite mode, the word "six" would be overwritten by the word "and," so the resulting string would read, "four, five, and." Overwrite mode simply replaces existing characters as you type. 2) Replacing files The term "overwrite" also refers to replacing old files with new ones. If you try to save a document with the same filename as an existing document, you may be asked if you want to overwrite the file. If you click OK, the old document will be overwritten by the new one. Similarly, when moving files to a folder, the operating system may ask you if you would like to overwrite existing files with the same filenames. If you select Overwrite, the old files will be replaced by the new ones. Updated February 29, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/overwrite Copy
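The difference between the two text-entry modes can be sketched with a couple of string operations (Python, illustrative only):

def insert_mode(line, cursor, text):
    return line[:cursor] + text + line[cursor:]              # existing characters shift right

def overwrite_mode(line, cursor, text):
    return line[:cursor] + text + line[cursor + len(text):]  # existing characters are replaced

line = "four, five, six"
cursor = line.index("six")
print(insert_mode(line, cursor, "and "))      # four, five, and six
print(overwrite_mode(line, cursor, "and "))   # four, five, and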

P2P

P2P Stands for "Peer to Peer." A P2P network connects computer systems to each other, over a local network or the internet, without a central server. P2P networks use distributed applications to split tasks between multiple computers. File-sharing networks, video chat clients, and multiplayer gaming commonly use P2P networks to connect computers together. Each computer or node on a P2P network is an equal peer to the rest. Unlike in the client-server model, every computer in a P2P network acts as both a client and a server. P2P networks are highly scalable since they do not use a central server that would require additional capacity for more clients. Having no central server also means that P2P networks are resilient and difficult to take down. P2P networks have several common uses: File sharing is the most common use, hosting files on many computers spread across a network instead of a single file server. Distributed computing (also known as grid computing) delegates processing tasks to any computer on the network, often for scientific applications like modeling molecular structures. Video and voice chat applications often use a central server to establish connections, then switch to a P2P network connection to allow two or more computers to send video and audio directly. Games often use P2P connections for multiplayer gaming to reduce lag and improve performance. P2P File Sharing P2P file-sharing networks like BitTorrent allow a computer to download a file from any other computer sharing the file. They may even download a file from multiple computers simultaneously to maximize available bandwidth. After completing the download, the P2P application makes it available to upload to other computers. Napster popularized P2P file-sharing in 1999, allowing anyone to share their music library with every other computer on the network. Other file-sharing networks, like Gnutella, allowed users to share more than just music. These networks made music and software piracy easy but were also used to spread malware disguised as music, movies, and software. Updated April 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/p2p Copy

PaaS

PaaS Stands for "Platform as a Service." PaaS is a type of cloud computing that provides a remote computing and development platform as an on-demand service. PaaS providers supply remote hardware infrastructure (servers, data storage, virtual machines, and network connections) and a software platform (an operating system, development tools, and database software). It allows businesses to build and run software on remote servers without running their own data center, server hardware, or software platform. PaaS is effectively the middle ground between Infrastructure as a Service (IaaS) and Software as a Service (SaaS). It includes the remote hardware provided by IaaS deployments and some software development tools and middleware, but not the full software stack provided by SaaS. This setup frees the customer from the responsibility of deploying and licensing the platform and tools, letting them focus on developing and deploying the software they want to run. [Image: IaaS, PaaS, and SaaS provide different levels of service] Like other cloud services, PaaS providers offer clients the flexibility to add or remove capacity when necessary. It saves small and medium businesses from the expense of running their own data centers, and even large enterprises can use PaaS products to supplement on-site servers. Since cloud products are scalable, customers can order the capacity they need to get started and add more as demand increases. PaaS providers can also place their data centers in multiple locations globally to provide redundancy and traffic balancing for their clients. Updated February 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/paas Copy

Packet

Packet A packet is a small amount of data sent over a network, such as a LAN or the Internet. Similar to a real-life package, each packet includes a source and destination as well as the content (or data) being transferred. When the packets reach their destination, they are reassembled into a single file or another contiguous block of data. While the exact structure of a packet varies between protocols, a typical packet includes two sections — a header and payload. Information about the packet is stored in the header. For example, an IPv6 header includes the following fields: Source address (128 bits) - IPv6 address of the packet origin Destination address (128 bits) - IPv6 address of the packet destination Version (4 bits) - "6" for IPv6 Traffic class (8 bits) - priority setting for the packet Flow label (20 bits) - optional ID that labels the packet as part of a specific flow; used to distinguish between multiple transmissions from a single origin Payload length (16 bits) - size of the data, defined in octets Next header (8 bits) - ID of the header following the current packet; may be TCP, UDP, or another protocol Hop limit (8 bits) - maximum number of network hops (between routers, switches, etc) before the packet is dropped; also known as "TTL" in IPv4 The payload section of a packet contains the actual data being transferred. This is often just a small part of a file, webpage, or other data transmission, since individual packets are relatively small. For example, the maximum size of an IP packet payload is 65,535 bytes, or 64 kilobytes. The maximum size of an Ethernet packet or "frame" is only 1,500 bytes or 1.5 kilobytes. Packets are intended to transfer data reliably and efficiently. Instead of transferring a large file as a single block of data, sending smaller packets helps ensure each section is transmitted successfully. If a packet is not received or is "dropped," only the dropped packet needs to be resent. Additionally, if a data transfer encounters network congestion due to multiple simultaneous transfers, the remaining packets can be rerouted through a less congested path. Updated May 31, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/packet Copy
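As a simplified sketch (Python standard library; the addresses come from the IPv6 documentation range), the fixed 40-byte IPv6 header described above can be assembled field by field:

import socket, struct

def build_ipv6_header(payload_len, next_header, hop_limit, src, dst):
    version, traffic_class, flow_label = 6, 0, 0
    first_word = (version << 28) | (traffic_class << 20) | flow_label   # version, traffic class, flow label
    header = struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
    header += socket.inet_pton(socket.AF_INET6, src)                    # 128-bit source address
    header += socket.inet_pton(socket.AF_INET6, dst)                    # 128-bit destination address
    return header

header = build_ipv6_header(1024, 6, 64, "2001:db8::1", "2001:db8::2")   # next header 6 = TCP
print(len(header))   # 40 (bytes)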

Page Fault

Page Fault A page fault occurs when a program attempts to access a block of memory that is not stored in the physical memory, or RAM. The fault notifies the operating system that it must locate the data in virtual memory, then transfer it from the storage device, such as an HDD or SSD, to the system RAM. Though the term "page fault" sounds like an error, page faults are common and are part of the normal way computers handle virtual memory. In programming terms, a page fault generates an exception, which notifies the operating system that it must retrieve the memory blocks or "pages" from virtual memory in order for the program to continue. Once the data is moved into physical memory, the program continues as normal. This process takes place in the background and usually goes unnoticed by the user. Most page faults are handled without any problems. However, an invalid page fault may cause a program to hang or crash. This type of page fault may occur when a program tries to access a memory address that does not exist. Some programs can handle these types of errors by finding a new memory address or relocating the data. However, if the program cannot handle the invalid page fault, it will get passed to the operating system, which may terminate the process. This can cause the program to unexpectedly quit. While page faults are common when working with virtual memory, each page fault requires transferring data from secondary memory to primary memory. This process may only take a few milliseconds, but that can still be several thousand times slower than accessing data directly from memory. Therefore, installing more system memory can increase your computer's performance, since it will need to access virtual memory less often. Updated June 2, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/page_fault Copy

Page Layout

Page Layout Page layout refers to the arrangement of text, images, and other objects on a page. The term was initially used in desktop publishing (DTP), but is now commonly used to describe the layout of webpages as well. Page layout techniques are used to customize the appearance of magazines, newspapers, books, websites, and other types of publications. The page layout of a printed or electronic document encompasses all elements of the page. This includes the page margins, text blocks, images, object padding, and any grids or templates used to define positions of objects on the page. Page layout applications, such as Adobe InDesign and QuarkXpress, allow page designers to modify all of these elements for a printed publication. Web development programs, such as Adobe Dreamweaver and Microsoft Expression Studio, allow Web developers to create similar page layouts designed specifically for the Web. Since there are many applications that create customized page layouts, there is also a specific file format category for page layout file types. These files are similar to word processing documents, but may contain additional page formatting information and other types of visual content. You can view a list of Page Layout File Types at FileInfo.com. Updated December 9, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/page_layout Copy

Page Orientation

Page Orientation Page orientation is the way that a rectangular page is displayed or printed. There are two common page orientations — landscape orients the page so that it is wider than it is tall, while portrait lays it out so that it is taller than it is wide. Page orientation and page size are the two primary considerations when designing the page layout of a document. Word processors and desktop publishing software allow the user to choose whether to design a document for portrait or landscape — some even allow both portrait and landscape page layouts in the same document. The default choice for printing a document is portrait mode, which is easier for most people to read. However, even if the text editor does not support page orientation choices when designing a document, the option is almost always available when the document is sent to the printer in an application's Print dialog box. [Image: Portrait layouts (left) are tall, while landscape layouts (right) are wide] The terms "landscape" and "portrait" come from the art world, where a landscape painting is wide to capture the horizon while a portrait is tall to depict a person's face and upper body. Televisions and most computer monitors are oriented in landscape, while most printed documents are in portrait. Smartphones, tablets, and some computer monitors can rotate to switch between landscape and portrait. Updated November 11, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pageorientation Copy

Page View

Page View Each time a user visits a Web page, it is called a page view. Page views, also written "pageviews," are tracked by website monitoring applications to record a website's traffic. The more page views a website has, the more traffic it is receiving. However, since a page view is recorded each time a Web page is loaded, a single user can rack up many page views on one website. Therefore, unique page views are commonly tracked to log the number of different visitors a website receives in a given time period. Page views are commonly confused with website hits. While people often use the term "hit" to describe a page view, technically a hit is recorded for each object that loads during a page view. For example, if a Web page contains HTML, two images, and a JavaScript reference, a single page view will record four hits. If a page contains over two hundred images, one page view will record over two hundred hits. Page views are more similar to impressions, which are commonly tracked by online advertisers. Page views and impressions may be identical if one advertisement is placed on each page. However, if multiple ads are positioned on each page, the number of ad impressions will be greater than the number of page views. Updated October 26, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pageview Copy

PAN

PAN Stands for "Personal Area Network." A PAN is a network of connected devices used by one person. It allows devices such as computers, tablets, smartphones, and smartwatches to communicate with each other. A PAN may incorporate a number of different connections, including Ethernet, Wi-Fi, and Bluetooth. For example, a desktop computer may connect to a personal router via Ethernet and a tablet may connect via Wi-Fi. A smartphone may communicate with a computer via Wi-Fi and a smartwatch via Bluetooth. "Smartphone tethering" is a common type of PAN, in which a laptop or other device connects to the Internet through a smartphone's cellular data connection. If your mobile plan allows it, you can set up your smartphone as a "mobile hotspot" or "personal hotspot," which makes it function like a wireless router that is connected to the Internet. You can then connect to the smartphone from your computer or tablet like you would connect to a Wi-Fi router. Tethering may take place over Bluetooth, Wi-Fi, or USB depending on the device. Since PANs are designed to be limited to a single user, they are often secured automatically by only accepting connections from authorized devices. However, it is wise to double-check your connection settings to make sure that a password is required or that only authorized devices can connect, rather than "everyone." Updated November 9, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pan Copy

Parallel Port

Parallel Port A parallel port is an external interface commonly found on PCs from the early 1980s to early 2000s. It was used to connect peripheral devices such as printers and external storage devices. It was eventually superseded by USB, which provides a smaller connection and significantly faster data transfer rates. The parallel port is a hallmark of old computer technology — large in size and slow in speed. A standard parallel port connector has two rows of 25 total pins surrounded by a metal casing. It is roughly an inch in width and has two screw-in connectors to keep the cable in place. Parallel port cables used for printing often have an even larger 36-pin "Centronics 36" connector that connects to the printer. The original parallel port standard was unidirectional and could transmit data at a maximum speed of 150 kbps. As printers became more advanced, it was necessary to increase the connection speed and also provide bidirectional communication. Instead of the PC sending a "print" command and hoping the print job was successful, the bidirectional capability allowed printers to send messages back to the PC, such as "ready," "printing," and "complete." Faster transmission speeds enabled the parallel port to be used for other purposes, such as external storage devices like the Iomega Zip drive. IEEE 1284 The parallel port was eventually standardized by the IEEE as "IEEE 1284." This standard defined new versions of the parallel port, including the Enhanced Parallel Port (EPP) and the Extended Capability Port (ECP). EPP could transmit data up to 16 Mbps (2 MB/s). ECP could achieve data transfer rates close to 20 Mbps or 2.5 MB/s via an ISA bus. While the first version of USB was not much faster than a parallel port, it offered other improvements, such as a smaller connector and the ability to send electrical voltage to power an external device. It also was hot-swappable, meaning a USB device could safely be connected or disconnected while a computer was running. Connecting or disconnecting a peripheral via an IEEE 1284 connection could damage the device or the PC. The introduction of USB 2.0, which provided data transfer rates of 480 Mbps, made the parallel port obsolete, and the IEEE 1284 standard faded into computer history. Updated June 6, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parallel_port Copy

Parameter

Parameter In computer programming, a parameter or "argument" is a value that is passed into a function. Most modern programming languages allow functions to have multiple parameters. While the syntax of a function declaration varies between programming languages, a typical function with two parameters (shown here in PHP) may look something like this: function graphXY($x, $y) {   ... } When this function is called within a program, two variables should be passed into the function. It may look something like this: $horizontal = 22; $vertical = 40; graphXY($horizontal, $vertical); In this example, the values 22 and 40 (stored in the variables $horizontal and $vertical respectively) are the "input parameters" passed into the graphXY() function. When a function has input parameters, the output of the function is often affected by the values that are passed into the function. Therefore, a single function can be called multiple times within a program and produce different output each time. Updated August 7, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parameter Copy

Parity

Parity Parity is a mathematical term that defines a value as even or odd. For example, the number 4 has an even parity, while the number 5 has an odd parity. When even and odd values are compared, such as 4 and 5, they are considered to have different parity. If two even or odd values are compared with each other, they have the same parity. In computer science, parity is often used for error checking purposes. For example, a parity bit may be added to a block of data to ensure the data has either an even or odd parity. This type of error detection is used by various data transmission protocols to ensure that data is not corrupted during the transfer process. If the protocol is set to an odd parity, all packets received must have an odd parity. If it is set to even, all packets must have an even parity. Otherwise, a data transmission error will occur and the corresponding packet(s) will need to be resent. Parity is also used in a type of computer memory called parity RAM. This type of RAM stores a parity bit with each byte of data to validate the integrity of each byte. Therefore, 9 bits of data are required for every byte stored in the RAM. While parity RAM was commonly used in early computers, memory has become more reliable and therefore most systems now use non-parity RAM. High-end workstations and servers, which require consistent data integrity, typically use ECC RAM, which provides a more advanced means of error checking than standard parity RAM. Updated March 31, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parity Copy

Parity Bit

Parity Bit A parity bit is a bit, with a value of 0 or 1, that is added to a block of data for error detection purposes. It gives the data either an odd or even parity, which is used to validate the integrity of the data. Parity bits are often used in data transmission to ensure that data is not corrupted during the transfer process. For example, every 7 bits of data may include a parity bit (for a total of 8 bits, or one byte). If the data transmission protocol is set to an odd parity, each data packet must have an odd parity. If it is set to even, each packet must have an even parity. If a packet is received with the wrong parity, an error will be produced and the data will need to be retransmitted. The parity bit for each data packet is computed before the data is transmitted. Below are examples of how a parity bit would be computed using both odd and even parity settings. Odd parity: Initial value: 1010101 (four 1s) Parity bit added: 1 Transmitted value: 10101011 Result: Odd parity (five 1s) Even parity: Initial value: 1010101 (four 1s) Parity bit added: 0 Transmitted value: 10101010 Result: Even parity (four 1s) The value of the parity bit depends on the initial parity of the data. For example, the binary value 10000000 has an odd parity. Therefore, a 0 would be added to keep the parity odd and a 1 would be added to give the value an even parity. While parity checking is a useful way of validating data, it is not a foolproof method. For instance, the values 1010 and 1001 have the same parity. Therefore, if the value 1010 is transmitted and 1001 is received, no error will be detected. This means parity checks are not 100% reliable when validating data. Still, it is unlikely that more than one bit will be incorrect in a small packet of data. As long as only one bit is changed, an error will result. Therefore, parity checks are most reliable when using small packet sizes. Updated March 31, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parity_bit Copy
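The computation above can be expressed in a few lines of code. The following Python sketch is only an illustration (the function name is made up); it counts the 1s in a 7-bit value and appends the appropriate parity bit:

def parity_bit(bits, parity="even"):
    # Count the 1s in the data, e.g. "1010101" contains four.
    ones = bits.count("1")
    if parity == "even":
        # Append a 1 only if needed to make the total count of 1s even.
        return "0" if ones % 2 == 0 else "1"
    # Odd parity: append a 1 only if needed to make the total count odd.
    return "1" if ones % 2 == 0 else "0"

data = "1010101"                        # four 1s
print(data + parity_bit(data, "odd"))   # 10101011 (five 1s - odd parity)
print(data + parity_bit(data, "even"))  # 10101010 (four 1s - even parity)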

Parked Domain

Parked Domain A parked domain is a registered domain name that is active but is not associated with a specific website. Instead, a parked domain displays a "parked page" to users who visit the URL. A typical parked page has a simple layout with a list of links related to the domain name. These links may generate advertising revenue for the parked domain's owner when users click on them. Domain parking is done for several different reasons. Some people register domain names hoping to generate "type-in" traffic for common words and phrases. If enough people visit the parked pages on a daily basis, the owners can earn consistent revenue from the clicks on the links or ads. Domainers or "cybersquatters" register large numbers of domain names as an investment, hoping to sell them to interested buyers in the future. The parked pages for these domains may contain advertisements, but they also include a contact link that visitors can use to make an offer for the domain name. Many web hosts display parked pages by default when a domain name has been registered, but a website has not yet been published by the owner. These pages often state the website is under construction or coming soon and may also include advertisements. The advertising links on these temporary parked pages generate revenue for the hosting company rather than the owner of the domain name. In some cases, a webmaster may register domain names similar to his or her primary domain name to prevent competitors from using them. These domain names often display parked pages by default. The webmaster may also choose to forward the domains to the primary site. If the domains are redirected, they are no longer parked domains but rather "forwarders" that direct users to a different URL. Updated September 18, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parked_domain Copy

Parse

Parse Parsing is the process of analyzing a sequence of symbols, such as source code or text, to extract meaningful information. In computer science, parsing is essential to many computational tasks, especially those involving programming languages and data processing. Software Programming In the context of programming, parsing breaks down source code written in languages like Python and JavaScript into a structured format that can be understood and processed by a compiler or interpreter. This structured format, often represented as a "parse tree," allows the system to validate syntax and identify errors. For instance: Compilers parse code during the compilation process to check the code's syntax before compiling it into an executable program. Interpreters parse scripts at runtime and execute the code dynamically. Web servers parse scripts written in languages such as PHP, Ruby, or Node.js. When server-side scripts are executed, they dynamically generate HTML content that is sent to users' browsers. Data Processing Parsing also plays a significant role in text and data processing tasks. For example: Search engines parse user queries to identify keywords, understand the user's intent, and deliver relevant results. Data extraction tools parse text documents, extracting key information like names, dates, and locations. Spreadsheet applications parse imported data to convert unstructured text into organized tables. Structured data parsers process data formats like JSON and XML commonly used in web APIs. Artificial intelligence applications also parse large amounts of data. For example, machine learning improves efficiency by identifying patterns in large datasets. Natural Language Processing (NLP) allows chatbots and translation apps to understand spoken languages. Whether processing a simple text string or millions of lines of code, parsing data is a foundational step in computer processing. Updated December 23, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/parse Copy
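As a concrete example of structured data parsing, the short Python sketch below uses the standard json module to parse a text string into a data structure a program can work with; the sample data is invented for illustration:

import json

# Raw text, as it might be received from a web API.
raw = '{"name": "Ada", "logins": 3, "tags": ["admin", "editor"]}'

# Parsing converts the flat string into structured, usable data.
record = json.loads(raw)
print(record["name"])       # Ada
print(len(record["tags"]))  # 2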

Partition

Partition A partition is a section of a storage device, such as a hard disk drive or solid state drive. It is treated by the operating system as a separate logical volume, which makes it function similarly to a separate physical device. A storage device may be formatted with one or more partitions. Some operating systems, such as Windows and Linux, require multiple partitions, while others, like macOS, may only require one. Windows stores system files in a "System Partition" and user data files in a data partition. Some Windows drives may also include a "Recovery Partition," which stores files used by the Windows Recovery Environment (WinRE). This partition is used to repair problems that prevent the operating system from booting. When formatting a storage device with a disk utility, you can either use the default partition scheme or create a custom one. For example, you could format a 2 TB hard drive with three partitions — a 300 GB partition used for the startup disk, a 700 GB partition for your photo library, and a roughly 1 TB partition for the rest of your data. These partitions would each appear as separate volumes on your computer even though they are all on one physical disk. You can mount and unmount any partitions besides the startup disk. Why Partition a Disk? Partitioning a disk can make it easier to organize files, such as video and photo libraries, especially if you have a large hard drive. Creating a separate partition for your system files (the startup disk) can also help protect system data from corruption since each partition has its own file system. For many years it was advisable to partition hard drives to reduce the minimum sector size and increase performance. However, with modern file systems and faster HDDs and SSDs, creating multiple partitions no longer has the same benefit. Older operating systems only allowed you to partition a disk during the formatting or reformatting process. This meant you would have to reformat a hard drive (and erase all of your data) to change the partition scheme. Modern operating systems and disk utilities now allow you to resize partitions and create new partitions or volumes on-the-fly. For example, Apple's APFS file system supports resizing partitions and creating new volumes without reformatting. Updated June 11, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/partition Copy

PascalCase

PascalCase PascalCase is a naming convention in which the first letter of each word in a compound word is capitalized. Software developers often use PascalCase when writing source code to name functions, classes, and other objects. PascalCase is similar to camelCase, except the first letter in PascalCase is always capitalized. Below are some examples. PascalCase: NewObject; camelCase: newObject; PascalCase: LongFunctionName() camelCase: longFunctionName() Both PascalCase and camelCase help developers distinguish words within names. For example, "LongFunctionName" is more readable than "longfunctionname." Specific letter cases are rarely required in computer programming. Instead, they are mostly conventions used by programmers. For example, one developer may prefer to name variables with PascalCase, while another might use camelCase. While the term "PascalCase" comes from software development, it may describe any compound word in which the first letter of each word is capitalized. Examples include the company "MasterCard," the video game "StarCraft," and of course, the website "TechTerms." NOTE: PascalCase is identical to "UpperCamelCase," but most developers avoid that term to prevent confusion with camelCase. "PascalCase" may also be written "Pascal Case" (two words). Updated October 7, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pascalcase Copy
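As a rough illustration of the convention (not a feature of any particular language), the following Python sketch converts an underscore-separated name into PascalCase and camelCase:

def to_pascal_case(name):
    # Capitalize each underscore-separated word and join the results.
    return "".join(word.capitalize() for word in name.split("_"))

def to_camel_case(name):
    pascal = to_pascal_case(name)
    # camelCase is identical except that the first letter is lowercase.
    return pascal[0].lower() + pascal[1:]

print(to_pascal_case("long_function_name"))  # LongFunctionName
print(to_camel_case("long_function_name"))   # longFunctionName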

Passcode

Passcode A passcode is a numeric sequence used to authenticate a user on a computer or electronic device. The word "passcode" is sometimes used synonymously with "password," but technically, a passcode only contains numbers. The security of a numeric passcode is proportional to how many digits it contains. Since there are ten possible values for each digit (0-9), the number of possible codes is equal to ten raised to the number of digits. For example, a four-digit passcode can have 10^4 (10,000) different combinations. A six-digit passcode can have 10^6 (1,000,000) different combinations. Passcodes provide quick and easy authentication on devices with a numeric keypad. Smartphones, for instance, allow you to enter a passcode as an alternative to fingerprint or face recognition. Other devices that use passcode authentication include ATMs, electronic safes, and security system control panels. When accessing an ATM, your PIN (personal identification number) may double as your passcode. Since passcodes only contain integers, they are naturally less secure than passwords or passphrases. Therefore, devices that provide passcode authentication often disallow access after a certain number of failed login attempts. For example, if you passcode-protect your smartphone and enter the wrong passcode multiple times, you may be required to wait several minutes before you can attempt to log in again. Updated April 22, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/passcode Copy
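The relationship between passcode length and the number of possible combinations is simple arithmetic, as this short Python sketch shows:

# Each digit has ten possible values (0-9), so an n-digit passcode
# has 10^n possible combinations.
for digits in (4, 6, 8):
    print(digits, "digits:", 10 ** digits, "combinations")

# 4 digits: 10000 combinations
# 6 digits: 1000000 combinations
# 8 digits: 100000000 combinations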

Passive-Matrix

Passive-Matrix Passive-matrix is an LCD technology that uses a grid of vertical and horizontal wires to display an image on the screen. Each pixel is controlled by an intersection of two wires in the grid. By altering the electrical charge at a given intersection, the color and brightness of the corresponding pixel can be changed. While passive-matrix displays are relatively simple and inexpensive to produce, they have a few drawbacks. Since the charge of two wires (both vertical and horizontal) must be altered in order to change a single pixel, the response time of passive-matrix displays is relatively slow. This means fast movement on a passive-matrix display may appear blurry or faded, since the electrical charges cannot keep up with the motion. On some passive-matrix displays, you may experience "ghosting" if you move the cursor quickly across the screen. Since passive-matrix monitors do not display fast motion well, most modern flat screen displays use active-matrix technology. Instead of managing pixels through intersections of wires, active-matrix displays control each pixel using individual capacitors. This allows pixels to change brightness and color states much more rapidly. While most of today's computer monitors and flat screen televisions have active-matrix screens, passive-matrix displays are still used in some smaller devices, since they are less expensive to produce. Updated March 18, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/passive-matrix Copy

Passkey

Passkey Passkeys are an authentication technology that logs a user into a website without using a username and password. Instead, passkeys log a person in using their already-authenticated personal device and a pair of cryptographic keys. Since passkeys use the same fingerprint, facial scan, or PIN the user unlocks their device with, they make logging into a website faster, easier, and more secure. When someone first creates a passkey for an account, their device generates two cryptographic keys — one private key stored on their device, and one public key stored on the server of the website or service for the account. When they attempt to sign in, the server and the user's device do not exchange keys; instead, the server sends a mathematical challenge to the user's device, which solves it by combining an algorithm and the private key. The device sends that solution back to the server, which validates it using the public key. Neither the server nor the user's device ever possesses both keys. The technology behind passkeys is the same technology that enables the use of hardware authentication tokens in multi-factor authentication services. Instead of using a separate hardware authentication token, the user can simply use their existing device and the unlocking method they have set. By removing usernames and passwords from the process, passkeys can prevent several types of security problems. First and foremost, there are no usernames and passwords to steal — no weak passwords to guess, and no credentials vulnerable to a phishing scam. If a website's server is compromised, it only knows the public key. If a user's device is stolen, a passkey login still requires a biometric scan or PIN entry before granting access. NOTE: Using a passkey to log in to a website or service requires the support of the device's operating system and web browser. The passkey standard is a collaboration between Apple, Google, Microsoft, the FIDO alliance, and the WWW Consortium. This group is working together to include support for passkeys in their software and hardware devices. Updated November 23, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/passkey Copy
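The challenge-response exchange described above can be sketched in a few lines of Python using the third-party cryptography package and an Ed25519 key pair. Real passkeys follow the FIDO2/WebAuthn protocols and involve the operating system and browser, so this is only a conceptual illustration:

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a key pair and sends only the
# public key to the website's server.
private_key = Ed25519PrivateKey.generate()   # stays on the device
public_key = private_key.public_key()        # stored by the server

# Sign-in: the server sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it with the private key (after the user unlocks
# the device with a fingerprint, face scan, or PIN)...
signature = private_key.sign(challenge)

# ...and the server validates the solution using the public key.
public_key.verify(signature, challenge)      # raises an exception if invalid
print("Challenge verified - user authenticated")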

Passphrase

Passphrase A passphrase is a string containing multiple words that is used to authenticate a user on a computer system. It may be used in combination with a username to create a login or may be required separately for additional authentication. The words password and passphrase are sometimes used synonymously, but they have different meanings. By definition, a passphrase must contain a phrase that includes multiple words, while a password may only have a minimum length of six or eight characters. There is no universal required length for a passphrase, but a typical passphrase is 20 to 30 characters in length. Since passphrases are usually longer than passwords, they are generally more secure. However, they can be less secure if common phrases are used within the passphrase. Examples of insecure phrases include song lyrics, common sayings, and well-known quotes. Strong passphrases include strings that are not dictionary words and contain special characters, such as ampersands, commas, and periods. A good passphrase should be unique but easy to remember. NOTE: As long as a system does not limit the length of your password, you can use a passphrase instead of a password. Updated October 9, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/passphrase Copy
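A passphrase can be generated by randomly selecting words from a large word list. The Python sketch below shows the idea using a tiny, made-up word list and the standard secrets module; a real generator would draw from thousands of words:

import secrets

# Stand-in word list; a real generator would use a large list (e.g. a diceware list).
words = ["plankton", "violet", "anchor", "thimble", "cascade",
         "meadow", "quartz", "lantern", "ember", "drift"]

# Pick four words at random using a cryptographically secure generator.
passphrase = "-".join(secrets.choice(words) for _ in range(4))
print(passphrase)   # e.g. quartz-ember-violet-drift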

Passthrough

Passthrough Passthrough is an option available on various electronic devices. It allows a signal to "pass through" the device unaltered. Different types of passthrough include HDMI, USB, and network passthrough. HDMI Passthrough A digital receiver (or AVR), processes input, such as digital audio, received through one or more HDMI ports. It may alter the audio signal using an equalizer or other effects before sending the output to the speakers or another device. In some cases, it is preferable to leave the signal untouched so it can be processed in its original state by another device. The "passthrough" option — typically located in the AVR's audio menu — prevents the input signal from being processed. HDMI "standby passthrough" is a similar but different option available on some AVRs. It allows digital data to flow through a specific HDMI port even when the device is in standby mode (plugged in, but not turned on). USB Passthrough USB passthrough allows USB peripherals to be daisy-chained through one or more devices. A common example is a keyboard with a USB port. You can plug a mouse into the USB port on the keyboard, which in turn connects to the USB port on the back of the computer. USB hubs provide the same function, often for multiple devices. USB passthrough passes digital data through a device, but it does not always provide USB power. If a USB device requires electrical power to operate (or charge), the passthrough device must supply the necessary wattage. Keyboards with powered USB passthrough typically use two USB cables. USB hubs that provide USB power may include an AC adaptor. Network Passthrough In networking, passthrough may refer to any device that relays data unaltered to another device on the network. However, it may also refer specifically to a modem or router that passes the IP address to a connected device. IP passthrough is a setting that turns off the device's routing features and lets the data pass through to the next connected device. It is similar to "bridge mode," but also removes any firewall features or other data processing. IP passthrough eliminates the need for network address translation (NAT) and assigns the public IP address to the connected device. Updated February 13, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/passthrough Copy

Password

Password A password is a string of characters that forms part of user authentication on a computer system, along with a username. The combination of a username and password is known as a login. The username is the publicly-known half of a login that identifies the account holder, and the password is the private half that should be kept secret. A password consists of a string of plain text characters — letters, numbers, and punctuation symbols (sometimes including spaces). When choosing a password, it is important to make it complicated enough so that other people cannot guess it — no birthdays or names of loved ones (which can be learned through social engineering) or plain words (which can be cracked using a brute force dictionary attack). A good, secure password is at least 12 characters long and includes a mix of lower- and upper-case letters, numbers, and punctuation symbols. Password fields obscure their contents by replacing every character with a • or * symbol. You should also never use the same password for multiple websites and services. After successfully breaching one website, hackers will often test stolen username and password combinations on other websites to see what they can access. For example, if you use the same password for an online store and your Gmail account, and hackers attack the store to learn customer email addresses and passwords, those hackers can use those same credentials to access your email account. Many people use password managers to keep a database of their passwords. By storing username and password combinations in a password manager, secured and encrypted behind a master password, you only need to commit the master password to memory. You can look up your passwords in the database to access them when logging into a website, or even use a web browser extension that automatically fills in your username and password on websites you have an account with. Password managers also include password generators that create secure passwords and automatically store them in the database. Updated August 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/password Copy

Password Manager

Password Manager A password manager is a software program that generates strong passwords, stores them in an encrypted database, and retrieves them when needed. If a password manager's browser extension is installed to a web browser, it can automatically fill in usernames and passwords on websites. A password manager's database is encrypted and protected by a master password, although some password managers also use biometrics or hardware-based tokens to access passwords. Password managers, when used properly, can help keep personal data secure in several ways. By generating a strong password, often a random string of letters, numbers, and special characters, it's far less likely that a password could be guessed. Using a unique password for every account prevents a password leak of one website from revealing your password to other websites. Keeping passwords in an encrypted database keeps them secure, as long as anyone trying to access the database does not also have your master password. Strong passwords can take different forms. vNf$6zWSmubH is one example of a password generated to be 12 characters long, using letters, numbers, and special characters. fovraq-8jenga-nEbryw is formatted differently, and is meant to be easier to type in situations where a password cannot be copied and pasted into a field. Since these passwords are stored in the password manager, they don't need to be simple enough for anyone to memorize. Some password managers keep your secure password database locally on your device, while others keep it in cloud storage. No matter where it is stored, the database is encrypted by your master password. Without your master password, your password database will be inaccessible. Make sure to choose one that is long and complex enough to not be guessed by anyone else, but is still memorable to you. NOTE: Some web browsers include built-in password managers. These can automatically sign you into websites, but only when you're using that browser. Using a separate password manager lets you access your passwords even if you switch to another browser; if you're using a cloud-based password manager, you can even access your passwords from any device with it installed. Updated September 23, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/password_manager Copy
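The password-generation step can be illustrated with the standard Python secrets module; this is only a sketch of the general approach, not how any particular password manager works:

import secrets
import string

def generate_password(length=12):
    # Draw characters from letters, digits, and punctuation using a secure RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())    # e.g. a random 12-character string like vNf$6zWSmubH
print(generate_password(20))  # longer passwords are harder still to guess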

Paste

Paste Paste is a command that allows you to insert data from the clipboard into an application. In order to use the Paste command, you must first use either the Copy or Cut command to save data to the clipboard. Once the clipboard contains data, you can paste the saved data into any supporting program. The Paste command is most commonly used to copy text from one area to another. For example, you can copy a paragraph from a text document and paste it into an email message. You can also copy a URL from an email message and paste it into the address bar of a web browser. When pasting formatted text, some programs provide a "Paste Special" command that allows you to select what formatting, if any, to keep when pasting the text. Other programs include a "Paste and Match Style" command, which matches the formatting of the surrounding text. Paste can also be used to create copies of images, video clips, audio tracks, and other data. For example, an image-editing program might allow you to paste a digital photo into the canvas. An audio production program may allow you to copy four measures of an audio track, then paste it several times to create a loop. While the Paste command supports many types of data, it will only work if the application supports the data saved to the clipboard. For example, you cannot paste image data into an audio production program or audio data into an image-editing program. The Paste command is typically found in the program's Edit menu. You can paste data by either selecting Edit → Paste or by using the keyboard shortcut "Control+V" (Windows) or "Command+V" (Mac). The data will be inserted wherever the cursor is located in the document. If you have selected part of the document, the Paste command will typically replace the selected data with the clipboard contents. NOTE: The Paste command can only be used to paste data into an editable document. Therefore, if you try to paste data into a read-only document like a webpage or an email message in your inbox, the Paste command will not work. Updated November 14, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/paste Copy

Patch

Patch A patch is a software update issued by a program's developer to fix problems users and testers find in a program. A patch typically fixes bugs and security vulnerabilities, although some patches may add new features or modify existing features. Most programs receive multiple patches and other updates over their lifetimes. Software developers create and publish patches for their programs for several reasons. The most important reason is to fix bugs and security vulnerabilities discovered after the program was released. While beta testers try to identify as many bugs as they can, it's common for some to go unnoticed until the program is in the hands of its end users. Developers may also use patches to improve their software's performance by making it run more efficiently. They may use a patch to add new features or change how certain features work based on user feedback. Finally, some apps require a patch to maintain compatibility with an updated operating system. Like many other applications, Mozilla Firefox automatically checks for and downloads patches. A developer can release a patch in several ways. Apps distributed through an app store will receive updates automatically through the store itself, often without any user involvement. Some applications include a built-in update utility that regularly contacts the developer's update server, notifying you when a patch is available. Other developers release patches on their app's website for you to download and install on your own. Patch notes provided by the developer tell you what's changed, whether it only fixes bugs or also adds new features. Regardless of how a patch is delivered, it's wise to check occasionally to keep your software up-to-date. NOTE: The term "hotfix" is often used synonymously with "patch," although the two have slightly different connotations. A hotfix implies that it fixes a single critical bug or security vulnerability, and is released as fast as possible to protect users from a zero-day exploit. A patch, however, often includes fixes for multiple bugs and is released as part of a set update schedule. Updated September 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/patch Copy

Path

Path In the real world, paths are trails or streets that lead to a certain location. Similarly, in the computer world, a path defines the location of a file or folder in a computer's file system. Paths are also called "directory paths" because they often include one or more directories that describe the path to the file or folder. A path can either be relative or absolute. A relative path defines a location that is relative to the current directory or folder. For example, the relative path to a file named "document.txt" located in the current directory is simply the filename, or "document.txt". If the file was stored inside a folder named "docs" within the directory, the relative path would be "docs/document.txt." If the file was located one folder up from the current directory, the relative path would be defined as "../document.txt". Absolute paths are defined from the root directory of the file system. This means no matter what folder is currently open, the absolute path to any given file is the same. Some examples of absolute paths include "/" for a Unix root directory, "/Applications/Preview.app" for a Mac OS X application, and "C:\Documents and Settings\All Users\Application Data" for the Windows application settings. As you can see, Macintosh and Unix systems use forward slashes ( / ) to identify path directories, while Windows uses backslashes ( \ ). Regardless of the syntax, paths are a simple way to describe the location of folders and files. Updated February 5, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/path Copy
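The difference between relative and absolute paths can also be seen programmatically. The following Python sketch uses the standard pathlib module; the file names are examples only:

from pathlib import Path

relative = Path("docs/document.txt")   # relative to the current directory
print(relative.is_absolute())          # False

# Resolving a relative path produces an absolute path from the root directory.
absolute = relative.resolve()
print(absolute.is_absolute())          # True
print(absolute)                        # e.g. /home/user/docs/document.txt

# "../document.txt" refers to a file one folder up from the current directory.
print(Path("../document.txt").resolve())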

Payload

Payload In computing, the term "payload" can mean several different things. 1) In computer networking, a payload is the part of a data packet containing the transmitted data. 2) In computer security, a payload is the part of a computer virus or other malware containing the code that carries out the virus's harmful activity. 1. Data Packet Payload A payload is the part of a protocol data unit (PDU) that contains the transmitted data or message. When one device sends data over a network, it needs to combine that data with a header into a packet. A packet's header contains directions from its origin to its destination and instructions on how to reassemble the payload. Once the destination device receives the packet, it discards the header and then reads the data in the payload—much like receiving and opening a letter, discarding the envelope, then reading the message. An Ethernet frame contains a payload preceded by a header and followed by a frame check sequence (FCS). How much information a payload can contain varies between protocols. Any piece of data exceeding the payload limit gets split into multiple parts that get reassembled once received. For example, the maximum size of an IP data packet is 64 KB, of which only 20 to 60 bytes are the header, leaving the rest of it to the payload. An Ethernet frame, meanwhile, is only 1500 bytes of payload with 18 bytes of overhead. When an IP data packet travels over Ethernet, it is split—header and all—into smaller pieces that serve as the payloads in a series of Ethernet frames. 2. Malware Payload A payload is the part of a computer worm or virus that executes the code that conducts malicious activity. Some viruses search for and steal information, monitor activity, delete files, or encrypt files to hold them hostage. The other parts of a virus are the vector, which is the method the virus uses to infect the computer, and the trigger, which is the condition that activates the payload. Updated October 13, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/payload Copy
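The header-plus-payload structure of a data packet can be sketched in Python with the standard struct module. The 8-byte header format below is invented purely for illustration and does not correspond to any real protocol:

import struct

payload = b"Hello, network!"

# Hypothetical 8-byte header: a 4-byte packet ID and a 4-byte payload length.
header = struct.pack("!II", 42, len(payload))
packet = header + payload

# The receiver reads the header first, then extracts the payload.
packet_id, length = struct.unpack("!II", packet[:8])
received_payload = packet[8:8 + length]
print(packet_id, received_payload.decode())   # 42 Hello, network!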

PBX

PBX Stands for "Private Branch Exchange." A PBX is a local telephone system designed for a business or organization. It allows a large number of users to share only a few external phone numbers. A private branch exchange is an internal telephony solution that also enables users to communicate externally through outside "POTS" phone lines. Instead of requiring each person to have a phone number, users are assigned extensions that serve as unique identifiers within the PBX. External callers can dial a single phone number and reach different people by entering their extension. By allowing numerous users to share a single phone number, a PBX significantly reduces the number of outside lines a company requires. Therefore, it is a cost-effective telecommunications solution for businesses. Traditional PBX A traditional PBX phone system is comprised of a local control unit and individual telephones. The control unit (or "base server") handles internal calls between users as well as external communications. It includes a storage device that logs voicemails and call histories. Early PBX systems were completely analog, meaning all communications took place through analog phone lines. More recent "traditional" PBX systems have digital lines and may use Ethernet cables rather than telephone cables for internal communication lines. A process called SIP trunking converts digital signals to analog if necessary when calling outside lines. Hosted PBX A hosted PBX is a cloud-based solution that does not require a local control unit. Instead, each user has a VoIP phone that connects directly to a router or switch. The PBX administrator can add, remove, and manage users with a software interface, typically via a web browser. Since a hosted PBX is cloud-based, users do not need to reside in the same physical location. Therefore, a hosted PBX is ideal for businesses with remote workers. Updated January 29, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pbx Copy

PC

PC Stands for "Personal Computer." PCs are what most of us use on a daily basis for work or personal use. A typical PC includes a system unit, monitor, keyboard, and mouse. Most PCs today also have a network or Internet connection, as well as ports for connecting peripheral devices, such as digital cameras, printers, scanners, speakers, external hard drives, and other components. Personal computers allow us to write papers, create spreadsheets, track our finances, play games, and do many other things. If a PC is connected to the Internet, it can be used to browse the Web, check e-mail, communicate with friends via instant messaging programs, and download files. PCs have become such an integral part of our lives that it can be difficult to imagine life without them! While PC stands for "personal computer," the term can be a bit ambiguous. This is because Macintosh computers are often contrasted with PCs, even though Macs are also technically PCs. However, Apple itself has used the term "PC" to refer to Windows-based machines, as opposed to its own computers, which are called "Macs." While the Mac/PC dilemma remains, PCs can always be contrasted with other types of computers, such as mainframes and server computers, including Web servers and network file servers. In other words, if you use a computer at home or at work, you can safely call it a PC. Updated October 5, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pc Copy

PCAP

PCAP Stands for "Packet Capture." PCAP is an API for capturing computer network data packets. It records and saves traffic data logs in a standardized format, readable by any traffic monitoring and analysis tool. Network administrators use packet-capturing tools to obtain detailed information on data packets traveling over a network without disrupting that traffic. Packet capture tools, also called packet analyzers or packet sniffers, use the PCAP API to intercept and log data packets that travel over a computer network. Network administrators may choose to capture the entire data packet or just the information contained in the packet's header. After capturing the data, the packet capture software can export the information to a log file and display it in a human-readable format. This information may include the time that the data packet was intercepted, its source and destination IP addresses, the data protocol used, and the packet's size. Some tools may also capture the data packet's payload itself, but can only analyze its contents if that data is not encrypted. Network administrators use packet capture tools to monitor unusual activity and identify the sources of network problems. For example, captured packets can help detect a computer on the network with a malware infection by revealing a large number of data packets from that computer's IP address to a single remote server. NOTE: Packet capture tools save logs using the .PCAP or .CAP file extensions. Any software that supports the PCAP API for packet capture can open and display these files. Updated March 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pcap Copy
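A .pcap log's structure can be read with nothing but the Python standard library. The sketch below parses the 24-byte global header and the 16-byte record header that precedes each captured packet; it assumes a little-endian capture file, skips the newer pcapng format, and omits error handling:

import struct

def read_pcap_summary(filename):
    with open(filename, "rb") as f:
        # 24-byte global header: magic number, format version, timezone offset,
        # timestamp accuracy, snapshot length, and link-layer type.
        global_header = f.read(24)
        magic = struct.unpack("<I", global_header[:4])[0]
        print(f"Magic number: {magic:#x}")

        # Each packet record starts with a 16-byte header: timestamp seconds,
        # timestamp microseconds, captured length, and original length.
        while (record := f.read(16)):
            ts_sec, ts_usec, incl_len, orig_len = struct.unpack("<IIII", record)
            f.seek(incl_len, 1)   # skip over the captured packet data itself
            print(f"Packet at {ts_sec}.{ts_usec:06d}: {orig_len} bytes")

# read_pcap_summary("capture.pcap")   # example usage with a capture file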

PCB

PCB Stands for "Printed Circuit Board." A PCB is a thin board made of fiberglass, composite epoxy, or other laminate material. Conductive pathways are etched or "printed" onto board, connecting different components on the PCB, such as transistors, resistors, and integrated circuits. PCBs are used in both desktop and laptop computers. They serve as the foundation for many internal computer components, such as video cards, controller cards, network interface cards, and expansion cards. These components all connect to the motherboard, which is also a printed circuit board. While PCBs are often associated with computers, they are used in many other electronic devices besides PCs. Most TVs, radios, digital cameras, cellphones, and tablets include one or more printed circuit boards. While the PCBs found in mobile devices look similar to those found in desktop computers and large electronics, they are typically thinner and contain finer circuitry. NOTE: PCB may also stand for "Process Control Block," a data structure in a system kernel that stores information about a process. In order for a process to run, the operating system must first register information about the process in the PCB. Updated August 2, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pcb Copy

PCI

PCI Stands for "Peripheral Component Interconnect." PCI is a hardware bus used for adding internal components to a desktop computer. For example, a PCI card can be inserted into a PCI slot on a motherboard, providing additional I/O ports on the back of a computer. The PCI architecture, also known as "conventional PCI," was designed by Intel and introduced in 1992. Many desktop PCs from the early 1990s to the mid-2000s had room for two to five PCI cards. Each card required an open slot on the motherboard and a removable panel on the back of the system unit. Adding PCI cards was an easy way to upgrade a computer since you could add a better video card, faster wired or wireless networking, or add new ports, like USB 2.0. The original 32-bit, 33 MHz PCI standard supported data transfer rates of 133 megabytes per second. An upgraded 64-bit, 66 MHz standard was created a few years later and allowed for much faster data transfer rates up to 533 megabytes per second. In 1998, IBM, HP, and Compaq introduced PCI-X (or "PCI eXtended"), which was backward compatible with PCI. The 133 MHz PCI-X interface supported data transfer rates up to 1064 megabytes per second. Both PCI and PCI-X were superseded by PCI Express, which was introduced in 2004. Updated June 25, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pci Copy

PCI Express

PCI Express Stands for "Peripheral Component Interconnect Express," and is abbreviated "PCIe" or "PCI-e." PCI Express is the standard expansion bus on modern computer motherboards. It replaced several previous standards, including PCI and AGP. Most computer motherboards include a set of PCIe expansion slots allowing users to install expansion cards to add new and improved computer capabilities. PCIe slots are available in four sizes — x1, x4, x8, and x16 — based on how many individual PCIe lanes run to and from that slot. A PCIe x1 slot is the smallest and slowest, using a single PCIe lane; a PCIe x4 slot uses four PCIe lanes at once and thus has four times the total bandwidth. Each size slot is longer than the previous size and can accept cards designed for the smaller types. For example, an x1 card can fit into any size PCIe slot, while an x8 card can fit into an x8 or x16 slot but not an x1 or x4. Updated versions of the PCIe bus are introduced every few years, with each new PCIe version capable of data transfer rates roughly twice as fast as its predecessor. For example, PCIe version 1.0 transfers data at 250 MB/s per lane, version 2.0 at 500 MB/s per lane, and 3.0 at 1 GB/s per lane. The slot size and arrangement do not change between generations, and any PCIe card works in a slot that it fits into at the slower version's speed. Therefore, a PCIe 4.0 x16 graphics card works in a PCIe 3.0 x16 slot but at half of its designed bandwidth. As of 2023, the most recent PCIe version is 6.0, which transfers data at 7.5 GB/s per lane (up to 121 GB/s at x16). NOTE: Other interfaces can use the PCIe bus to transfer data, like the M.2 slot used by NVMe SSD drives. Updated July 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pci_express Copy

PCI-X

PCI-X Stands for "Peripheral Component Interconnect eXtended." PCI-X is a local hardware bus for adding internal expansion cards to a computer. It extends the original PCI bus and allows users to install expansion cards with higher bandwidth requirements, like gigabit networking cards and SCSI storage controllers. Unlike conventional PCI slots, PCI-X slots are typically only found on server and workstation computers. Conventional PCI slots can transfer data at a clock speed of either 33 or 66 MHz, resulting in a maximum data transfer rate of 133 megabytes/second. While this was good enough for most components in the early 1990s, higher-end components made later in the decade needed a faster interface. The first revision of PCI-X doubled the bandwidth of PCI, transferring data at 133 MHz. A second revision, PCI-X 2.0, supported double data rate (266 MHz) and quad data rate (533 MHz) transfers, supporting a theoretical maximum data transfer rate of 4,264 megabytes/second. PCI-X slots on a motherboard are backward compatible with conventional PCI cards. A PCI-X slot is longer than a 32-bit PCI slot, but is the same size and pin layout as the uncommon 64-bit PCI slot. A PCI card can fit into a PCI-X slot and function normally; some PCI-X cards with the right pin notches can fit into a 32-bit PCI slot (with part of the edge connector hanging out over the end of the slot) and still work at significantly reduced bandwidth. The PCI-X bus first arrived in 1999, and the updated PCI-X 2.0 specification followed in 2003. Neither version saw significant adoption outside of the server market. PCI Express, also introduced in 2003, was an entirely different design that was faster and more efficient at transferring data. By the end of the 2000s, PCI Express was the widespread expansion bus standard, and PCI-X was limited to server motherboards designed to support legacy hardware. Updated December 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pcix Copy

PCM

PCM Stands for "Pulse-Code Modulation." Pulse-code modulation is a process that converts analog audio into digital audio. It samples the amplitude of an analog audio signal at regular intervals, then rounds each sample to the nearest pre-set digital amplitude level. It is the standard method for converting analog audio into digital audio for CDs, computers, and digital voice communication. There are two variables involved when PCM sampling an audio signal. The first is the sample rate, measured in kilohertz (kHz), which is the number of times per second the signal is analyzed. The second is the bit depth, which sets how many possible digital values can represent that sample. When a sample is analyzed, a process called quantization converts the amplitude to the nearest digital value. A higher bit depth provides a wider range of digital values, so the quantized value can get closer to the actual value. For example, audio on a CD is sampled at 44.1 kHz and has a bit depth of 16 bits per sample. A bit depth of 16 bits provides 65,536 possible amplitude values. An audio signal is sampled and quantized, with each sample assigned a digital value, to create a digital approximation of the original signal. PCM was first widely used to digitize voice telephone communications to make it easier to transmit over long distances without any signal loss. The Digital Signal 0 (DS0) specification, originally set back in the 1960s and still used today, digitizes a phone call at 8 kHz at 8 bits per sample for a bitrate of 64 kbps. NOTE: Uncompressed PCM audio is stored in several different file formats, including .PCM, .WAV, and .AIFF. Updated October 26, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pcm Copy
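Sampling and quantization can be demonstrated in a few lines of Python. The sketch below samples a 440 Hz sine wave at a low rate and rounds each sample to the nearest 16-bit value; it is an illustration of the process, not a complete audio encoder:

import math

SAMPLE_RATE = 8000   # samples per second (like a DS0 voice channel)
BIT_DEPTH = 16       # 16 bits allow 65,536 possible amplitude levels
MAX_LEVEL = 2 ** (BIT_DEPTH - 1) - 1   # 32,767 for signed 16-bit samples

def quantize(amplitude):
    # Round an analog amplitude between -1.0 and 1.0 to the nearest level.
    return round(amplitude * MAX_LEVEL)

# Sample a 440 Hz tone at regular intervals and quantize each sample.
samples = [quantize(math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]   # one second of audio

print(samples[:5])   # the first few quantized sample values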

PCMCIA

PCMCIA Stands for "Personal Computer Memory Card International Association." PCMCIA was an organization that focused on creating expansion card standards for portable computers. It began in 1989 and lasted until 2010 when it was acquired by the USB Implementers Forum (USB-IF). The most notable product developed by the Personal Computer Memory Card International Association is the PCMCIA card (commonly called a "PC card"), which provided expansion capabilities for laptops. The card could be inserted into a PCMCIA slot on the side of a laptop, providing additional memory or connectivity. There were three versions of the PCMCIA card standard: Type I - 3.3 mm thick - used for memory expansion Type II - 5.0 mm thick - most common; used for NICs (Ethernet cards), modems, and sound cards Type III - 10.5 mm thick - used for ATA hard drives Larger PCMCIA slots were backward compatible with smaller cards. For example, a Type III slot could support Type I, II, and III cards, and a Type II slot could support Type I and II cards. In the 1990s, PCMCIA cards were a common means of adding extra functionality to laptops. But as laptop components became smaller, manufacturers were able to fit all the necessary components into their laptops, making PCMCIA cards unnecessary. Additionally, many peripherals that previously required a PCMCIA card became available in USB versions. In the early 2000s, the trend toward thinner and lighter laptops eventually made PCMCIA cards obsolete. Updated August 9, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pcmcia Copy

PDA

PDA Stands for "Personal Digital Assistant." A PDA is a small, handheld electronic computer for keeping and organizing personal information. Someone could use a PDA to keep their appointment calendar, write notes, manage an address book, and keep other personal data handy wherever they went. A PDA synced with its user's computer using a serial, USB, or Bluetooth connection to add files and keep data up-to-date. Several PDA platforms (primarily PalmOS and Windows Mobile) were common in the late 1990s and early 2000s, but they declined and disappeared after the introduction of smartphones running iOS and Android. PDAs used low-resolution resistive touchscreens that required you to tap with a stylus to interact with it. You would tap on an app icon to launch that app, tap buttons to trigger commands, and even tap tiny keys on an on-screen virtual keyboard to enter text. Most PDAs also supported some form of handwriting recognition for text input; for example, PalmOS devices used an alphabet called Graffiti that allowed users to write text, letter by letter, using simplified glyphs. The primary use for a PDA was as a digital organizer to replace a paper address book, calendar, notepad, and to-do list. Each PDA platform included a basic suite of built-in apps for most tasks and allowed you to install third-party software to add new features (and to play some games). Later models of PDA included Wi-Fi connectivity to access the Internet, although most were limited to email, instant messaging, and basic web browsing. The first mainstream PDA was the Apple Newton, released in 1993. It failed to catch on since the handwriting recognition at release was unreliable, and Apple abandoned the platform in 1998. The 3Com Palm Pilot, released in 1996, was the first PDA to achieve widespread success thanks to its simplicity and easy-to-learn Graffiti writing system. After Palm devices proved PDAs could be successful, Microsoft released the Windows Mobile operating system for Pocket PC devices like the HP iPAQ and Toshiba e800. Finally, the original BlackBerry PDA devices set themselves apart by using small physical keyboards and could connect to a cellular phone network to send and receive email messages. Updated December 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pda Copy

PDF

PDF Stands for "Portable Document Format." PDF is a file format designed to present documents consistently across multiple devices and platforms. It was developed by Adobe in 1992 and has since become one of the most widely used formats for saving and exchanging documents. A PDF file can store a wide variety of data, including formatted text, vector graphics, and raster images. It also contains page layout information, which defines the location of each item on the page, as well as the size and shape of the pages in the document. This information is all saved in a standard format, so the document looks the same, no matter what device or program is used to open it. For example, if you save a PDF on a Mac, it will appear the same way in Windows, Android, and iOS. The PDF format also supports metadata, such as the document title, author, subject, and keywords. It can store embedded fonts so you do not need to have the appropriate fonts installed to view the document correctly. PDF documents may also be encrypted so only authorized users can open them. Creating and Viewing PDFs PDFs are rarely created from scratch. Instead, they are usually generated from an existing document. For example, you might save a Word document as a PDF or scan a hard copy and save it as a PDF. While the PDF format was originally proprietary, Adobe has opened the format to other developers, so many programs now include a "Save as PDF" or "Export to PDF" option. macOS provides the "Save as PDF" feature in the standard Print dialog box, so you can save any printable document as a PDF. To view a PDF, you can use Adobe Reader or any program or plug-in that supports the PDF format. You can edit PDFs using Adobe Acrobat or a third party PDF editor. For example, many editors include a "Fill & Sign" feature, which allows you to fill out fields and sign the document. Programs that support OCR allow you to digitally scan the document for text and then edit or delete it. You can also add images and blocks of text to the PDF. Most PDF editors also allow you to merge multiple PDFs into a single document. NOTE: Since the Portable Document Format is designed to be an exchange format, PDF editing options are limited compared to other formats. Therefore, when designing a document, it is best to create it with an editor such as Microsoft Word, CorelDRAW, or Adobe InDesign, then save the document as a PDF. Updated April 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pdf Copy

PDU

PDU Stands for "Protocol Data Unit." A PDU is a specific block of information transferred over a network. It is often used in reference to the OSI model, since it describes the different types of data that are transferred from each layer. The PDU for each layer of the OSI model is listed below. Physical layer – raw bits (1s or 0s) transmitted physically via the hardware Data Link layer – a frame (or series of bits) Network layer – a packet that contains the source and destination address Transport layer – a segment that includes a TCP header and data Session layer – the data passed to the network connection Presentation layer – the data formatted for presentation Application layer – the data received or transmitted by a software application As you can see, the protocol data unit changes between the seven different layers. The resulting information that is transferred from the application layer to the physical layer (and vice versa) is not altered, but the data undergoes a transformation in the process. The PDU defines the state of the data as it moves from one layer to the next. NOTE: PDU also stands for "Power Distribution Unit." A typical power distribution unit looks like a power strip with multiple outlets, but includes electrical components that ensure equal voltage is distributed to each outlet. They are commonly used in data centers to provide consistent power to connected servers. These types of PDUs are often rack-mountable, meaning they can be placed in a 1U rack space like a server. Updated September 27, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pdu Copy

Pebibyte

Pebibyte A pebibyte (PiB) is a unit of data storage equal to 2^50 bytes, or 1,125,899,906,842,624 bytes. Like kibibyte (KiB) and mebibyte (MiB), pebibyte has a binary prefix to remove any ambiguity when compared to the multiple possible definitions of a petabyte. A pebibyte is slightly larger than a petabyte (PB), which is 10^15 bytes (1,000,000,000,000,000 bytes); a pebibyte is roughly 1.1259 petabytes. A pebibyte is 1,024 tebibytes. 1,024 pebibytes make up an exbibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the common definition of a petabyte could mean different numbers. When computer engineers first began using the term kilobyte to refer to a binary measurement of 2^10 bytes (1,024 bytes), the difference between binary and decimal measurements was roughly 2%. As file sizes and storage capacities expanded, so did the difference between the two types of measurements — a pebibyte is more than 11% larger than a petabyte. Using pebibyte to refer specifically to binary measurements helps to address the confusion. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 1, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pebibyte Copy
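The difference between the two units is a quick calculation; in Python:

pebibyte = 2 ** 50    # 1,125,899,906,842,624 bytes
petabyte = 10 ** 15   # 1,000,000,000,000,000 bytes

print(pebibyte / petabyte)   # 1.125899906842624, i.e. roughly 1.1259 petabytes per pebibyte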

Peopleware

Peopleware Computers operate using a combination of hardware and software. However, without user interaction, most computers would be useless machines. Therefore, "peopleware" is sometimes considered a third aspect that takes into account the importance of humans in the computing process. Peopleware is less tangible than hardware or software, since it can refer to many different things. Examples of peopleware include individual people, groups of people, project teams, businesses, developers, and end users. While peopleware can mean many different things, it always refers to the people who develop or use computer systems. While the term may not be as widely used as "hardware" or "software," peopleware is an important part of computer technology. It is a good reminder that the role of people should not be overlooked by any business or organization. Updated August 10, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/peopleware Copy

Peripheral

Peripheral A peripheral is any external device that connects to a computer for input or output. A peripheral connects either physically or wirelessly, and it is controlled by the computer when connected. Peripherals may also be called "I/O devices" or simply "accessories." Peripherals can be classified by whether they provide input, receive output, or do both. For example, a keyboard and mouse are input devices that pass a user's commands onto the computer. A monitor is an output device that takes information from the computer and displays it on the screen. A multi-function printer provides both output (when printing) and input (when scanning). An external storage device like a flash drive also provides both input and output, in the form of files moving between it and the computer. Peripherals that physically connect to a computer do so using its hardware ports. USB is the most-used connection, but Thunderbolt, HDMI, Display Port, and audio ports are also ubiquitous. Bluetooth is the most common form of wireless connection, although some devices communicate by RF through a dongle that physically connects to a computer. Some peripherals require a software driver for the computer to interact with it, but most are plug-and-play. Updated October 4, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/peripheral Copy

Perl

Perl Perl is an open-source general-purpose programming language. It was originally designed for processing and manipulating large batches of text, but is now used for many purposes. It is most commonly used for writing Common Gateway Interface (CGI) scripts for web servers, analyzing log files, and performing system administration tasks. Perl borrows syntax from the C language, the Unix shell, and other common shell scripting languages. It is a flexible language built on the principle that "there's more than one way to do it." Its flexibility and common roots in other programming languages give Perl an easy learning curve. Perl does not limit programmers to a single paradigm, instead allowing developers to choose between procedural, object-oriented, or functional programming. It is also an interpreted language, meaning that the Perl interpreter can run scripts without compiling code. The designers of Perl focused on enabling programmers to write programs quickly and easily; the efficient use of computing resources was not a priority. Perl includes features that make Perl scripts more resource-intensive than performing the same task in a program written in C. Perl is also a forgiving language, tolerating exceptions to its rules; this can make finding bugs in the code difficult. NOTE: Perl is not an acronym, although various backronyms have been created for it like "Practical Extraction and Reporting Language." Updated November 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/perl Copy

Permalink

Permalink Short for "permanent link." A permalink is a URL that links to a specific news story or Web posting. Permalinks are most commonly used for blogs, which are frequently changed and updated. They give a specific Web address to each posting, allowing blog entries to be bookmarked by visitors or linked to from other websites. Because most blogs are published using dynamic, database-driven Web sites, they do not automatically have Web addresses associated with them. For example, a blog entry may exist on a user's home page, but the entry may not have its own Web page, ending in ".html," ".asp," ".php," etc. Therefore, once the posting is outdated and no longer present on the home page, there may be no way to access it. Using a permalink to define the location of each posting prevents blog entries from fading off into oblivion. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/permalink Copy

Permissions

Permissions Permissions (or "privileges") determine what actions a user account is allowed to perform on a computer system. The term often refers to file system permissions, which specify which user accounts on a system have ownership of, and access to, specific files and folders. It may also refer to the ability of a user account to perform specific tasks in a database or database-driven application. Multi-user operating systems like Windows, Unix, Linux, and macOS use file systems with built-in permissions systems. Each user has a user account, which includes several user folders they control. They may also be part of a group with other users with a similar role. Each file and folder on the file system uses an access control list (ACL) that specifies which accounts and groups can access and modify it. In addition to standard user accounts, these systems also include special accounts with "root," "administrator," or "superuser" permissions that offer full control over all files and folders on that system. However, a program with root permissions could modify or damage system files or other programs. To limit this risk, programs can instead run in special accounts, known as "system accounts" or "system user accounts," that offer limited permissions to restrict a program's access to only what it needs. Permitted Operations Each file and folder includes distinct permissions levels for its owner, other members of the owner's group, and all other users on the system. Permissions applied to a folder are inherited by its files and subfolders. Unix, Linux, and macOS permissions give accounts access to three operations: Read lets an account view the contents of files and folders. Write allows an account to create and modify files and subfolders. Execute lets the account run programs within a folder. The permissions system in Windows has a similar set of operations: Read lets an account view the contents of a file or folder. Write allows the account to create and modify files and subfolders. Read & Execute lets a user view files and run programs without modifying files. Modify allows them to create, modify, or delete files and subfolders. Full Control lets an account do everything, including changing the permissions granted to other user accounts. Updated April 19, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/permissions Copy
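For a concrete sense of the Unix-style read, write, and execute permissions described above, the short Python sketch below applies the typical file and folder permissions using the standard os and stat modules. The file and folder names are hypothetical placeholders, not paths from any real system.

import os
import stat

file_path = "index.html"    # hypothetical file
folder_path = "images"      # hypothetical folder

# Files: owner can read and write; group and everyone can only read
os.chmod(file_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

# Folders: owner can read, write, and execute; group and everyone can read and execute
# (execute permission on a folder is what allows its contents to be listed and entered)
os.chmod(folder_path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)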

Personal URL

Personal URL Many websites that host online communities allow you to create your own personal URL within the website. This custom URL typically serves as the Web address of your profile page and can be shared and bookmarked by other users. Examples of websites that allow you to choose a personal URL include social networking sites like Facebook, photo sharing sites like Flickr, and various Web forums. Users can choose a custom name, which will then become part of their personal URL. For example, a personal URL on Facebook may be http://www.facebook.com/username. The username might be a made up name, such as "pinkbunny77" or your actual name, such as "john.johnson." If you have an account on a website that allows you to select a personal URL, but have not chosen one yet, the Web address of your profile page may include a unique ID instead of a username. For example, the Web address of a Facebook profile page might be "http://www.facebook.com/profile.php?id=1234567890". Since a username is much easier to remember than a long string of numbers, selecting a username is usually recommended. When you choose your personal URL for Facebook or another website, make sure you decide on one you really like, since you may not be able to change it. Personal URLs are also called purls, personalized URLs, or custom Web addresses. They are different from personal home pages, which are created and published on a user's own website. Updated December 13, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/personal_url Copy

Petabyte

Petabyte A petabyte is 10^15 or 1,000,000,000,000,000 bytes. One petabyte (abbreviated "PB") is equal to 1,000 terabytes and precedes the exabyte unit of measurement. A petabyte is slightly less in size than a pebibyte, which contains 1,125,899,906,842,624 (2^50) bytes. Since most storage devices can hold a maximum of a few terabytes, petabytes are rarely used to measure the storage capacity of a single device. Instead, petabytes are more commonly used to measure the total data stored in large computer networks or server farms. For example, Internet companies like Google and Facebook store over 100 petabytes of data on their servers. NOTE: You can view a list of all the units of measurement used for measuring data storage. Updated November 21, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/petabyte Copy
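The difference between the decimal petabyte and the binary pebibyte can be verified with a quick calculation; the Python snippet below simply restates the arithmetic above.

petabyte = 10 ** 15   # 1,000,000,000,000,000 bytes
pebibyte = 2 ** 50    # 1,125,899,906,842,624 bytes

print(pebibyte / petabyte)   # 1.125899906842624 -- a pebibyte is about 12.6% larger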

Petaflops

Petaflops Petaflops is a unit of measurement for the performance of a processor's floating point unit, or FPU. It may also be written "petaFLOPS" or "PFLOPS." Since FLOPS stands for "Floating Point Operations Per Second," the term "petaflops" may be either singular (one petaflops) or plural (two or more petaflops). One petaflops is equal to 1,000 teraflops, or 1,000,000,000,000,000 FLOPS. Petaflops are rarely used to measure a single computer's performance, since only the fastest supercomputers run at more than one petaflops. Therefore, petaflops are more often used when calculating the processing power of multiple computers. Also, since FLOPS only measures floating point calculations, petaflops is not necessarily a good indicator of a computer's overall performance. Other factors, such as the processor's clock speed, the system bus speed, and the amount of RAM may also affect how quickly a computer can perform calculations. Updated March 18, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/petaflops Copy

Pharming

Pharming Pharming is yet another way hackers attempt to manipulate users on the Internet. While phishing attempts to capture personal information by getting users to visit a fake website, pharming redirects users to false websites without them even knowing it. While a typical website uses a domain name for its address, its actual location is determined by an IP address. When a user types a domain name into his or her Web browser's address field and hits enter, the domain name is translated into an IP address via a DNS server. The Web browser then connects to the server at this IP address and loads the Web page data. After a user visits a certain website, the DNS entry for that site is often stored on the user's computer in a DNS cache. This way, the computer does not have to keep accessing a DNS server whenever the user visits the website. One way that pharming takes place is via an e-mail virus that "poisons" a user's local DNS cache. It does this by modifying the DNS entries, or hosts file. For example, instead of www.apple.com resolving to its legitimate IP address, such as 17.254.3.183, it may resolve to another address chosen by the hacker. Pharmers can also poison entire DNS servers, which means any user who uses the affected DNS server will be redirected to the wrong website. Fortunately, most DNS servers have security features to protect them against such attacks. Still, they are not necessarily immune, since hackers continue to find ways to gain access to them. While pharming is not as common as phishing scams are, it can affect many more people at once. This is especially true if a large DNS server is modified. So, if you visit a certain website and it appears to be significantly different from what you expected, you may be the victim of pharming. Restart your computer to reset your DNS entries, run an antivirus program, then try connecting to the website again. If the website still looks strange, contact your ISP and let them know their DNS server may have been pharmed. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pharming Copy
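The name-resolution step that pharming subverts can be seen with a few lines of Python. This is only an illustration of a normal DNS lookup (using a placeholder domain); on a machine with a poisoned DNS cache or hosts file, the same call could silently return an attacker's address instead.

import socket

# Translate a domain name into an IP address using the system's normal resolution process
ip_address = socket.gethostbyname("www.example.com")
print(ip_address)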

Phishing

Phishing Phishing is a social engineering attack that tries to trick victims into divulging personal, financial, or security information. A phishing attempt consists of an email message that appears to be from a company (like Microsoft, PayPal, or a banking institution) that asks the victim to update or verify personal information on the company's website. However, a link within the email message actually sends the victim to a fraudulent website that imitates the legitimate one. When the victim enters their username, password, credit card number, or other personal information into a form on the website, it sends that information to the scammers who run it. Scammers can use phishing attacks for several goals. They are often after the victim's financial information, like their credit card number or bank account information. They may also want to hijack a victim's account on a specific website. For example, by stealing a victim's email login, they can access their inbox to find more personal information or send messages imitating the victim to scam others. Scammers may also want the victim's username and password from one website to try the same combination elsewhere. Targeted phishing attacks, called spear phishing, attempt to trick specific employees into sending credentials that allow attackers access to a company's internal network. Identifying a Phishing Attempt While many phishing email messages look convincing, there are some common signs you can look for. You can check the message's From field to see the domain that the email was sent from. You can also examine hyperlinks to see where they send you — these links often resemble the company's actual URL but include a subtle misspelling or deceptive subdomain (for example, http://www.microsoft.login-info.com). Links may also point directly to an IP address instead of a domain name. The body of the email may contain misspellings and grammatical errors, and phishing messages are often written with a sense of urgency to get you to act quickly and without thinking. If you have any doubt whether an email is legitimate, don't follow links or enter personal information. Instead, you can enter the correct website URL directly and log in as you otherwise would to see whether the system asks you to update your information. Finally, you should not reuse a username and password combination on multiple websites. That way, if you do fall victim to a phishing attack, the scammers cannot use the stolen credentials to access your other accounts. Updated November 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/phishing Copy
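One of the checks described above, examining where a hyperlink actually points, can be automated. The Python sketch below parses the deceptive example URL from this entry and prints its real hostname; the URL is illustrative, not a real phishing site.

from urllib.parse import urlparse

url = "http://www.microsoft.login-info.com/account/verify"
hostname = urlparse(url).hostname

# The link mentions "microsoft," but the site actually belongs to the login-info.com domain
print(hostname)   # www.microsoft.login-info.com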

PHP

PHP Stands for "PHP: Hypertext Preprocessor." PHP is a scripting language web developers use to create dynamic websites. It is often installed by default on Apache web servers, alongside MySQL as part of a "LAMP" configuration. Each PHP code snippet starts with <?php and ends with ?>. PHP pages may contain multiple snippets inserted throughout the HTML that generate dynamic content. When a website visitor accesses a PHP page, the web server processes, or "parses," the PHP code before sending the HTML to the client's browser. In the example below, the PHP date() function gets the local date from the server and inserts it into the HTML. ("F d, Y" formats the date as December 31, 2021.) The current date is <?php echo date("F d, Y"); ?>. PHP scripts can range from simple one-line commands to complex functions. Some PHP-based websites generate nearly all webpage content dynamically using a series of PHP scripts. While early versions of PHP were not object-oriented, PHP3 introduced support for classes, including object attributes and methods. Developers can create custom object libraries and import them into various PHP pages, similar to a compiled language. Because PHP code is processed before the HTML is loaded, users cannot view the PHP code within a webpage. They can only see the HTML output by the PHP scripts. Much of PHP's syntax is similar to other languages such as C++, Java, and Perl. However, PHP contains several proprietary functions, including MySQL-specific methods for accessing records from a MySQL database. Because of its integration with MySQL, its consistent maintenance, and overall ease of use, PHP remains a popular choice for creating dynamic websites. File extension: .PHP Updated November 6, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/php Copy

Phreaking

Phreaking Phreaking is a slang term that describes the action of experimenting with or manipulating a telephone system. Since phreaking took place before personal computers became popular, it is sometimes considered to be the precursor to computer hacking. While not all phreaking activities are illegal, the term is often associated with using a phone system to make free long distance calls. Early phone systems had limited security features, which allowed "phreaks" to tap into analog phone lines and make calls free of charge. This was often done using a device called a "blue box," which simulated a telephone operator's console. Phreaks could use these devices to route their own calls and bypass the telephone company switches, allowing them to make free calls. This activity was more prominent before the turn of the century, when cell phone companies began including free long distance service. Phreaking has evolved over the past several decades along with telecommunications technology. In the 1980s and 1990s, phreaks began using modems to access computer systems over telephone lines. Once connected via modem, tech-savvy users could access private data or exploit computers connected on the local network. This activity also faded out around the turn of the century as dial-up modems were replaced by DSL and cable modems and new security measures were put into place. While phreaking still exists, it is much less common than other types of computer hacking. Updated April 30, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/phreaking Copy

Piconet

Piconet A piconet is a network that is created using a wireless Bluetooth connection. Some examples of piconets include 1) a cell phone and a computer, 2) a laptop and a Bluetooth-enabled digital camera, or 3) several PDAs that are connected to each other. Piconets can include anywhere from two to eight devices. One device serves as the master device, while the rest of the devices within the network are slave devices. The master device acts as the hub, meaning the slave devices must communicate through the master device in order to communicate with each other. In most piconets, the computer serves as the master device. The term "piconet" is derived from the words "pico," which means "very small" (technically, one trillionth), and "net," which is short for "network." Therefore, the word "piconet" literally means "very small network." Updated May 28, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/piconet Copy

PIM

PIM Stands for "Personal Information Manager." A Personal Information Manager is a category of software application that helps manage all sorts of personal information, from notes and office documents to contacts, photos, and personal journals. A PIM app can focus on a single kind of information, like a calendar or note-taking app, or it can be a comprehensive suite of tools that manages multiple types of information. Since many operating systems now include a few simple built-in PIM tools like calendars and email clients, many third-party PIM apps focus on taking those features and making them more comprehensive and customizable. PIM software stores the information entered into it in a database. Notes, saved documents, appointments, and anything else in that database are easily accessed. Some PIM software stores that database in the cloud to make it accessible from multiple devices or a web browser. Since this personal information can be very private, some PIM databases can be encrypted to keep the data secure. PIM software was a major feature of Personal Digital Assistants like the Palm Pilot and the Apple Newton. Now that smartphones have replaced PDAs, mobile operating systems include many PIM features. In addition to a smartphone's built-in calendar, contacts, and note-taking apps, some popular PIM software titles include Evernote, Microsoft OneNote, and DEVONthink. Updated October 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pim Copy

Ping

Ping A ping is a signal sent to a host that requests a response. It serves two primary purposes: 1) to check if the host is available and 2) to measure how long the response takes. A ping request can be performed using a ping command, which is a standard command in most command line interfaces. Several network utilities provide a ping feature, which allows you to ping a server by simply entering the IP address or domain name. Most ping programs send multiple pings and provide an average of the pings at the end. To perform a ping command from a command line interface, simply type "ping" followed by the IP address or domain name of the host you want to ping (e.g., ping 123.123.123.123). The ping itself consists of a single packet (often 32 or 56 bytes) that contains an "echo" request. The host, if available, responds with a single packet as a reply. The ping time, measured in milliseconds, is the round trip time for the packet to reach the host and for the response to return to the sender. Ping response times are important because they add overhead to any requests made over the Internet. For example, when you visit a webpage the ping time is added to the time it takes a server to transmit the HTML and related resources to your computer. Pings are especially important in online gaming, where events happen in real-time. While Internet connection speeds can affect pings, ping response time is often directly related to the physical distance between the source and destination systems. Therefore, a fast connection between New York and Tokyo will likely have a longer ping than a slow connection between New York and Philadelphia. Network congestion may slow down pings, which is why pings are often used for troubleshooting. What is a Good Ping Response Time? < 30 ms - excellent ping; almost unnoticeable; ideal for online gaming 30 to 50 ms - average ping; still ok for online gaming 50 to 100 ms - somewhat slow ping time; not too noticeable for web browsing but may affect gaming 100 ms to 500 ms - slow ping; minimal effect on web browsing, but will create noticeable lag in online gaming > 500 ms - pings of a half second or more will add a noticeable delay to all requests; typically happens when the source and destination are in different parts of the world Updated August 15, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ping Copy
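A true ping uses ICMP echo packets, which require raw-socket privileges, but the round-trip idea can be approximated by timing an ordinary TCP connection. The Python sketch below is a rough illustration of measuring a response time in milliseconds, not a replacement for the ping command; the host and port are placeholders.

import socket
import time

def tcp_ping(host, port=80, timeout=2.0):
    # Time how long it takes to open (and immediately close) a TCP connection
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000   # milliseconds

print(f"{tcp_ping('example.com'):.1f} ms")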

Pinterest

Pinterest Pinterest is a social networking website that allows you to organize and share ideas with others. You can share your own content as well as things that other Pinterest users have posted. Once you register for a free account, you can create your own "boards" to organize your content. Examples of topics include recipes, home decor, photography, quotes, and games. You can upload images and "pin" them to relevant boards. Other Pinterest users can browse your boards and comment on individual items. Likewise, you can browse other users' boards and "Like," "Repin," or comment on their pinned items. Similar to Twitter, Pinterest allows you to follow other users. If you find another user's content to be especially interesting, you can click "Follow All" to have all their boards show up in your account in real time. If you only want to follow specific boards, you can click "Follow" next to each board you want to follow. Pinterest does not inform users when you choose to unfollow them. To experience Pinterest for yourself, visit the Pinterest website. Updated May 11, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pinterest Copy

Pipeline

Pipeline A pipeline in the context of computer processing is a technique for executing multiple instructions at once. Computer processors can handle millions of instructions per second. Every instruction goes through several stages, each processed by a different part of the processor. With each clock cycle, an instruction moves from one stage to the next, and while one instruction is going through one stage, other instructions are simultaneously going through different stages. A typical CPU pipeline involves five stages — instruction fetch, instruction decode, execute, memory access, and write back (although this may vary by architecture). Without pipelining, a processor could handle only one instruction at a time. Since it takes several clock cycles for each instruction to move through all the stages, most of the processor would sit idle while a single instruction moves from stage to stage. Non-pipelined processing does not make a single instruction take longer, but it does mean that the processor can process fewer instructions per second. As instructions move through the stages with each clock cycle, several occupy the pipeline at once. In the third clock cycle, for example, the first instruction has moved to the Execute stage while the next instructions in the pipeline occupy the Instruction Decode and Instruction Fetch stages. Instruction pipelining works a lot like an automotive assembly line, where each station has a single task — installing seats, mounting the engine, attaching the wheels, etc. If only one car is moving down the line at a time, most of those stations are sitting idle, and the factory won't be able to make many cars in a day. However, if dozens of vehicles are on the line at once — enough to keep each station busy — far more cars will be completed in a single day even though each one still takes the same time. GPU Graphics Pipelines In addition to CPUs, computer GPUs also use pipelining to process instructions. The GPU graphics pipeline is more complex, involving several dedicated processing components that perform mathematical functions to render 3D scenes and objects into 2D images displayed on a screen. GPU architectures vary significantly from generation to generation, but most graphics pipelines follow a similar pattern. A simple graphics pipeline starts by converting the 3D coordinates of vertices (the corners of the triangles that make up every 3D object) into 2D coordinates on the flat plane of a screen. It turns those coordinates into wireframes, then fills in the geometry between those vertices to create shapes. Next, it rasterizes those shapes into pixels by adding texture maps, lighting, and other shader effects. Hidden pixels and other information not visible to the "camera" are discarded, then the final image is saved to the frame buffer and displayed on the screen. Updated December 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pipeline Copy
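The staggered overlap described above can be sketched as a toy simulation. The Python snippet below prints which instruction occupies each stage on every clock cycle, assuming one instruction enters the pipeline per cycle; the stage and instruction names are illustrative, not taken from any real processor.

STAGES = ["Fetch", "Decode", "Execute", "Memory", "Writeback"]
instructions = ["I1", "I2", "I3", "I4"]

total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    occupied = []
    for s, stage in enumerate(STAGES):
        i = cycle - s                     # index of the instruction currently in this stage
        if 0 <= i < len(instructions):
            occupied.append(f"{stage}: {instructions[i]}")
    print(f"Cycle {cycle + 1} -> " + ", ".join(occupied))

By the third cycle, the output shows I1 in the Execute stage while I2 and I3 occupy the Decode and Fetch stages, matching the description above.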

Piracy

Piracy When someone installs and uses commercial software without paying for the program, it is called "pirating" the software. This name comes from the traditional meaning of the word "pirate," which is a sea-faring criminal that steals and loots belongings from others. But far from the stereotypical sea pirate, a software pirate can be anyone who owns a computer. Software piracy is committed by simply downloading or copying a program that a user has not paid for. Since computer programs are stored in a digital format, they are easy to copy and reproduce. For example, a game may be burned to a CD and transferred to the computer of an individual who has not paid for the program. Software programs can also be illegally downloaded from the Internet from unauthorized sources. Since pirating software does not require many resources, it has grown into a major problem for the computer industry. While it may seem like an innocuous act, pirating software is the same as stealing. Software companies often invest thousands or even millions of dollars into creating the programs they sell. The income from selling these programs is what allows companies to produce the software and to continue improving the programs we use. Just because it is possible to copy a software program does not mean it is OK. Installing a commercial program from an illegal copy is the same thing as walking out of a store with the program and not paying for it. While there are some programs that are free to use (such as shareware and freeware programs), it is important to pay for commercial software. You can avoid software piracy by only downloading software from authorized sources and making sure that you have valid software licenses for all the programs you use. Remember that paying for software programs supports the software industry, which is good for all of us! Updated July 18, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/piracy Copy

Pixel

Pixel A pixel (short for "picture element") is the smallest element of a digital image displayed on screen — a single dot of color within a larger grid of dots that combine to form a picture. The word "pixel" can refer to both the physical element on a monitor capable of displaying a single dot of color and to the digital data representing each dot within a raster image file. The more pixels in an image or monitor, the higher its resolution and the more detail it can display. Computer monitors and other screens contain a matrix of thousands or even millions of pixels. An individual pixel displayed on a monitor is often too small to be seen by the naked eye, with some HiDPI monitors containing more than 200 pixels per inch. Each pixel on a screen consists of three smaller subpixels (one each for red, green, and blue) that mix light into a single color. Every raster (or bitmap) image consists of a grid of square pixels. Each pixel only displays a single color; combining thousands (or millions) of pixels allows the colors in those individual dots to blend smoothly. How many possible colors a pixel can display is determined by the image's color depth — the number of bits it uses to store a single pixel. A color depth of 8 bits allows for 256 (2^8) possible color values, while a higher color depth of 24 bits allows each pixel to have one of more than 16 million (2^24) possible colors. Updated February 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pixel Copy
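The relationship between color depth and the number of possible colors is simply a power of two. The short Python snippet below works out the values mentioned above.

for bits in (1, 8, 16, 24):
    print(f"{bits}-bit color depth: {2 ** bits:,} possible colors")

# 1-bit color depth: 2 possible colors
# 8-bit color depth: 256 possible colors
# 16-bit color depth: 65,536 possible colors
# 24-bit color depth: 16,777,216 possible colors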

Plagiarism

Plagiarism Plagiarism is the act of copying someone else's work and publishing it as your own. This includes text, media, and even ideas. Whenever another person's work is copied and republished without an appropriate reference or citation, it is considered plagiarism. Examples of plagiarism range from small infractions such as not putting quotes around a quotation to blatant violations such as copying an entire website. Even if the original content has been modified, such as an altered image or a reworded article, it is still considered plagiarism if no credit is given to the original source. We live in a time when most information is available in a digital format. While this makes it easier to access information than ever before, it also makes it easier to plagiarize other people's work. All it takes is a simple copy operation to copy large amounts of text or images from another source. This content can be pasted into a document or another publication in a matter of seconds. Anyone with a website can potentially republish the content for the whole world to see, without citing the original author. Because it is so easy to copy and paste digital information, plagiarism in the information age has become a serious problem. Fortunately, there are laws in place to protect against plagiarism. The most notable is international copyright law, which states that each individual's published work is automatically protected by copyright. This means others cannot copy the work without the author's approval and can be held liable for breaking the law if they do so. In 1998, the U.S. Congress passed the Digital Millennium Copyright Act (DMCA), which heightened penalties for copyright infringement on the Internet. Avoiding plagiarism is easy. It comes down to doing what's right. If you use someone else's information, make sure you cite the source. When writing a paper, this means adding APA or MLA citations when you reference other publications. When publishing a website, it means adding a reference and a link to the website where the information is from. If you need to reference a large amount of content from another source, you should contact the author and ask for permission. That way, you can make sure you use and reference the information appropriately. For information about citing content from TechTerms.com, please view the Citation Guidelines. Updated September 5, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/plagiarism Copy

Plain Text

Plain Text Plain text is digital text without any formatting. It includes only standard characters like letters, numbers, and punctuation symbols. Plain text lacks support for changing the font, text size, or style. It also does not allow other elements like tables or embedded images. It is one of two text types; the other, which does include formatting options, is known as rich text. Plain text requires less data than rich text, which makes it the most efficient way to store text. ASCII has historically been the primary encoding method for plain text, but modern formats like UTF-8 and UTF-16, which support a wider character set, are increasingly common. These encoding methods can store most individual letters or symbols using only 1 or 2 bytes each. Computer programs save source code files, HTML and other markup language files, log files, and configuration files as plain text. Most operating systems include at least one built-in plain text editor. Windows includes Notepad and macOS includes TextEdit, while Unix and Linux often come with multiple editors like Emacs, Nano, Pico, and Vim. Word processors can create and open plain text files or convert them to rich text, allowing you to add formatting; they can also strip formatting away from rich text to convert it to plain text. Advanced text editors designed for software and web developers include special features that automatically highlight specific syntax in programming languages. For example, a text editor designed for a web developer will automatically highlight HTML tags, links, and other code blocks to help them stand out. NOTE: Plain text files use a wide variety of file extensions depending on their purpose. General text files use .TXT. Each programming and markup language uses its own extension, like .C, .PY, or .HTML. Common configuration file extensions include .CONF, .CFG, and .INI. Updated April 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/plaintext Copy
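The claim that these encodings store most characters in 1 or 2 bytes is easy to check. The Python snippet below encodes two short strings as UTF-8 and counts the resulting bytes; the sample strings are arbitrary.

text = "plain text"
print(len(text.encode("utf-8")))       # 10 bytes -- one byte per ASCII character

accented = "café"
print(len(accented.encode("utf-8")))   # 5 bytes -- the "é" takes two bytes in UTF-8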

Platform

Platform A computing platform is the hardware and software on which an application runs. Hardware platforms are defined by the processor architecture, such as x86 or ARM. Software platforms are often determined by the operating system, such as Windows, macOS, and Linux. In many cases, "platform" and "operating system" are used interchangeably. However, since an operating system may run on more than one type of hardware, a platform may be further specified as a combination of hardware and software. For example, macOS on Intel hardware is one platform, while macOS on Apple Silicon is another. The combination of hardware with first and third-party software is called a "platform ecosystem." Mobile platforms are prominent examples. iOS and Android are two dominant mobile platforms that provide app stores with downloadable apps. Video game consoles are platforms with unique hardware and operating systems that run software specifically written for the platform. Virtual environments are also considered platforms. For example, the Java Runtime Environment (JRE) is a software platform since it runs Java applications and applets. A web browser may even be considered a platform since web applications can run within it. Programs that run on multiple platforms are known as cross-platform applications. Some are built on a framework designed for cross-platform apps, while others are written natively for each platform. Translating an app from one platform to another is called porting the application. Updated March 18, 2023 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/platform Copy
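A program can inspect the platform it is running on at runtime. The Python snippet below reports the operating system identifier and the processor architecture, the two ingredients of a platform as described above; the exact strings printed depend on the machine.

import sys
import platform

print(sys.platform)         # e.g. "win32", "darwin", or "linux"
print(platform.machine())   # e.g. "x86_64", "AMD64", or "arm64"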

Plotter

Plotter A plotter is a printer designed for printing vector graphics. Instead of printing individual dots on the paper, plotters draw continuous lines. This makes plotters ideal for printing architectural blueprints, engineering designs, and other CAD drawings. There are two main types of plotters – drum and flatbed plotters. Drum plotters (also called roller plotters) spin the paper back and forth on a cylindrical drum while the ink pens move left and right. By combining these two directions, lines can be drawn in any direction. Flatbed plotters have a large horizontal surface on which the paper is placed. A traveling bar draws lines on the paper as it moves across the surface. Most drum and flatbed plotters provide output sizes that are much larger than standard inkjet and laser printers. For example, a typical inkjet printer creates documents that are 8.5 inches wide. A drum plotter may produce documents that are 44 inches wide. The length of a document printed by a drum plotter is only limited by the size of the paper. Documents printed by flatbed plotters are constrained to the length and width of the printing surface. NOTE: While pen plotters are still used today, most models have been replaced by wide-format inkjet printers. Updated July 1, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/plotter Copy

Plug and Play

Plug and Play Plug and Play, often abbreviated as "PnP," is an automatic process that configures computer peripherals as soon as they are connected. The user does not need to manually install drivers or configure device settings, as those tasks are handled instead by the operating system. Plug and play is the standard behavior for connecting new devices to Windows, macOS, and Linux computers, as well as mobile devices and tablets. When you attach a new plug and play peripheral, your computer performs several automatic steps. First, the operating system detects and identifies the device through an identification protocol (although the exact method varies depending on the interface). Next, it locates and installs the appropriate device driver from a local driver library or online repository. Finally, it configures the new peripheral by assigning necessary resources like memory addresses and interrupt requests (IRQs). For example, when you plug a new mouse into one of your computer's USB ports, the operating system automatically identifies which model mouse it is and who made it, and then installs the correct driver. Within a few seconds, the new mouse is ready to use. Most expansion interfaces on modern computers support plug and play, including external interfaces like USB, Thunderbolt, HDMI, and DisplayPort, as well as internal expansion slots like PCI Express and SATA. Devices connected over external interfaces are often hot-swappable, letting you connect and disconnect them without rebooting your computer. Adding new internal components requires you to turn off your computer first, but it will automatically recognize and configure PnP-compatible components once the computer boots. Updated September 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/plug_and_play Copy

Plug-in

Plug-in A plug-in is a small software component that adds new features to another program. Plug-ins may add new functionality, modify the user interface, or allow the software to support additional file formats. First-party plug-ins may be packaged with an application, while third-party plug-ins can be made available as a free download or as separate commercial software. Many professional graphic design, video editing, and digital audio workstation applications include support for plug-ins. For example, a plug-in for Photoshop may add new filters and editing tools, while a plug-in for ProTools may add new synthesized instruments and audio effects. Media players use plug-ins to add support for new file formats and codecs. Web browsers use a type of plug-in called extensions to add new features. Unlike plug-ins for other types of applications, web browser extensions are distributed as source code instead of compiled software. Historically, web browser plug-ins like Adobe Flash, which were distributed as executable software, could lead to security vulnerabilities and software instability, so browser developers dropped plug-in support in favor of extensions. However, the purpose behind them is the same — to allow the user to customize the appearance and functionality of a web browser as they like. Updated February 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/plugin Copy

PMU

PMU Stands for "Power Management Unit." The PMU is a microcontroller, or integrated circuit, that controls the power functions of Macintosh computers. Though it is not a large component, the PMU contains several parts, including memory, software, firmware, and its own CPU. Some responsibilities of the PMU include: Telling the computer when to turn on, turn off, go to sleep, and wake up. Maintaining the system's PRAM (Parameter Random Access Memory). Managing system resets from various types of commands. Managing the real-time clock (date and time). Because every function the computer performs requires electrical power, the power management unit is an essential part of every Macintosh computer. Therefore, it is important that the PMU functions correctly. In the rare case that the PMU stops functioning or behaves erratically, it can be reset, which should fix any problems caused by the PMU. The method for resetting the PMU depends on the type of Macintosh computer. Some Macs have a small reset button on the logic board that can be pressed when the computer is off. Other models include a reset button on the outside of the computer. These buttons typically have an icon of a triangle pointing to the left, indicating it is a reset button. The PMU on some PowerBook G4 models can be reset by turning off the computer, removing the battery and power supply, and pressing Shift-Control-Option-Power. Since different machines require different methods for resetting the PMU, it is best to check your manual or Apple's Support website to find out the proper way to reset your Mac's PMU. In newer Macs, such as the MacBook and MacBook Pro, the PMU is referred to as the System Management Controller, or SMC. Updated November 2, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pmu Copy

PNG

PNG Stands for "Portable Network Graphics." PNG is a compressed raster graphic file format that is a standard format for web graphics. It is suitable for most images, including illustrations, icons, and photographs. The PNG format supports 24-bit true color, lossless compression, and transparency. In addition to JPEG and GIF, PNG is one of the most common image formats on the web. Introduced after JPEG and GIF had both become web standards, it includes features from both file types. It supports both 8-bit indexed color for low-color illustrations (like GIF) and 24-bit true color for photographs (like JPEG). It uses lossless compression to reduce file size without introducing any artifacts. However, this results in significantly larger files than lossy-compressed JPEG images. Unlike either GIF or JPEG, PNG supports the RGBA color model. This adds an optional alpha channel with 256 levels of transparency instead of the single on-or-off level supported by the GIF format. Designers can gradually fade an image or include a subtle drop shadow that looks natural no matter what is behind it. The PNG format was introduced in 1995 as a patent-free replacement for the GIF format. Since the GIF format used a patented form of data compression, the makers of software that supported GIF files were required to pay royalties. The PNG format was free to use and improved on several other shortcomings of the GIF format, but it was not widely supported by web browsers for several years after its introduction. NOTE: Two animated variants of the PNG format have been introduced: .MNG and .APNG. MNG failed to achieve widespread software support; APNG, while less common than the animated GIF format, is now supported by all major web browsers. File Extension: .PNG Updated May 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/png Copy
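As an illustration of the RGBA color model and alpha transparency described above, the Python sketch below creates a small PNG with a half-transparent red fill. It assumes the third-party Pillow imaging library is installed; the file name is a placeholder.

from PIL import Image   # third-party Pillow library (pip install Pillow)

# 100x100 image in RGBA mode: red at roughly 50% opacity (alpha 128 of 255)
img = Image.new("RGBA", (100, 100), (255, 0, 0, 128))
img.save("example.png")   # saved with PNG's lossless compression, alpha channel preserved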

Podcast

Podcast A podcast is an episodic audio program delivered over the Internet. Listeners can subscribe to a podcast to get new episodes delivered automatically to their podcast player app, or they may download individual episodes without subscribing. The word "podcast" is a portmanteau of "iPod" and "broadcast," reflecting that podcasts are like digital, on-demand versions of radio shows. Podcasts can be produced by media companies, podcast networks, or individual amateurs. The barrier to entry is low enough that a single person with a computer and a microphone can create and distribute a podcast. However, like a radio show, many podcasts are produced by a team that includes hosts, writers, engineers, and editors. Most podcasts are distributed freely and supported by advertisements. Podcast makers release new episodes via RSS feeds, which include direct links to the audio file and metadata like titles, descriptions, and show notes. Dedicated podcast player apps read these RSS feeds and automatically download new episodes as they're released. Listeners can find a link to a podcast's RSS feed on the podcast's website or browse for shows in a player's podcast directory. Apple maintains the largest podcast directory through iTunes and its podcast app. However, audio streaming companies like Spotify or Audacy may create and host exclusive podcasts unavailable outside of their services. Podcasts funded by their subscribers directly, instead of through advertisements or corporate funding, often create custom RSS feeds for each subscriber through a service like Patreon. Podcast player apps include special audio playback controls designed for podcasts. For example, some podcasts include chapter markers that allow listeners to skip to certain parts of a show. Most allow listeners to adjust the playback speed to increase the pace and get through episodes faster, modify equalizer settings to make voices easier to hear, and create playlists of episodes from multiple podcasts. NOTE: Podcast episodes are typically saved as .MP3 files, although .AAC and .M4A files are also common. Updated June 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/podcast Copy

PoE

PoE Stands for "Power over Ethernet." PoE provides electrical current over an Ethernet connection. It powers electronic devices via Ethernet cabling without the need for batteries or a wall outlet. Multiple PoE standards exist, but they all provide both data and electrical power over a single cable. One method, known as "Alternative A," uses the same wires within the Ethernet cable to transmit power and data. "Alternative B" uses one set of wires for sending power and another set of wires for transmitting data. Technically, the PoE standards are defined as a subset of the IEEE 802.3 collection of standards. Examples include: 802.3af (2003) - supports 15.4 watts 802.3at (2009) - a.k.a. "PoE+" or "PoE Plus"; supports 25.5 watts 802.3bt (2018) - Type 3 supports 55 watts; Type 4 supports 90 watts In order to power a device via Ethernet, a PoE adapter is required. This device, also called an "Ethernet injector," plugs into a standard power outlet and provides power to one or more Ethernet ports. A broad range of PoE devices exist, including: VoIP phones Wireless access points Audio amplifiers Lighting controllers Security cameras Updated May 17, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/poe Copy

PON

PON Stands for "Passive Optical Network." A PON is a telecommunications network that transmits data over fiber optic lines. It is "passive" since it uses unpowered splitters to route data sent from a central location to multiple destinations. PONs are used by ISPs and NSPs as a cost-effective way to provide Internet access for customers. Since a PON is a point-to-multipoint (P2MP) system, it provides a more efficient way to transmit data than a point-to-point network. The main transmission line can split off into 32 separate lines, which requires far less infrastructure than building direct lines to each destination. The central location of a PON is also called the optical line terminal (OLT), while the individual destinations are called optical network units (ONUs). Lines that terminate outside buildings are called fiber-to-the-neighborhood (FTTN) or fiber-to-the-curb (FTTC). Lines that extend all the way to buildings are called fiber-to-the-building (FTTB), or fiber-to-the-home (FTTH). While all PONs use optical cables and unpowered splitters, there are several different versions. Below is a list of different types of PONs. APON - an early implementation (from the mid-1990s) that uses asynchronous transfer mode (ATM) to transfer data BPON - the first "broadband" PON that supports data transfer rates of 622 Mbps, the same speed as an OC-12 (STM-4) line GPON - a "gigabit-capable PON" that supports 2.488 Gbps downstream and 1.244 Gbps upstream; also called the ITU G.984 standard EPON - the most popular PON implementation; transmits data as Ethernet frames at up to 10 Gbps downstream and upstream; also known as GEPON or the IEEE 802.3 standard Updated April 17, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pon Copy

Pop-Up

Pop-Up The term "pop-up" has two computer-related meanings. One refers to a window and the other is a type of menu. 1. Pop-Up Window A pop-up window is a type of window that opens without the user selecting "New Window" from a program's File menu. Pop-up windows are often generated by websites that include pop-up advertisements. These ads are produced with JavaScript code that is inserted into the HTML of a Web page. They typically appear when a user visits a page or closes a window. Some pop-up ads show up in front of the main window, while others show up behind the main browser window. Ads that appear behind open windows are also called "pop-under" ads. Regardless of where pop-up advertisements appear on your screen, they can be pretty annoying. Fortunately, browser developers have realized this and most Web browsers now include an option to block pop-up windows. If you notice pop-up windows appearing on your computer when your browser is not open, you may have an adware program running on your computer. The best solution to this problem is to run an anti-spyware program that will locate and remove the malware from your system. 2. Pop-Up Menu A pop-up menu is a type of menu that pops up on the screen when the user right-clicks a certain object or area. It can also be called a contextual menu since the menu options are relevant to where the user right-clicked on the screen. Pop-up menus provide quick access to common program functions and are used by most operating systems and applications. Updated October 7, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/popup Copy

POP3

POP3 Stands for "Post Office Protocol version 3." POP3 is a legacy protocol for receiving emails. When configuring a new email account, you may have two options: POP3 or IMAP. POP3, the older protocol, downloads messages and removes them from the mail server. IMAP, a newer protocol, syncs messages on the client device with the server. What does "POP3" Mean? Post Office: Just as your letters and packages await you in a post office box, POP3 acts as a temporary storage place for your emails until you fetch them. Protocol: Think of it as a set of rules, like the guidelines on how to address an envelope, ensuring your mail (in this case, email) arrives correctly. Version 3: POP3 is the third and latest version of this protocol, refined over time for improved mail retrieval. When using POP3, emails are downloaded to your device and removed from the mail server. This process is similar to collecting mail from a post office box. Once you pick up your mail, it's no longer stored at the post office. Today, it is common for users to synchronize email across multiple devices, such as a laptop, tablet, and smartphone. Therefore, POP3's download-and-remove method is not ideal. For instance, if you check your email on a computer, the downloaded emails will not appear on other devices, like your phone or tablet. IMAP (Internet Message Access Protocol) offers a distinct advantage since it keeps emails on the server after you read them, allowing for consistent access across various devices. While POP3 was the standard email retrieval protocol from the 1970s through the 1990s, IMAP has largely replaced it. Since IMAP provides better syncing across devices, many email clients and webmail systems use IMAP as the default option. Updated April 12, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pop3 Copy
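POP3's download-and-remove behavior can be seen in Python's built-in poplib module. The sketch below is a minimal illustration using placeholder server details and credentials, not a ready-to-use mail client.

import poplib

mailbox = poplib.POP3_SSL("pop.example.com", 995)   # hypothetical mail server
mailbox.user("username")
mailbox.pass_("password")

count, _ = mailbox.stat()                       # number of messages waiting on the server
for i in range(1, count + 1):
    response, lines, octets = mailbox.retr(i)   # download message i
    mailbox.dele(i)                             # mark it for deletion from the server
mailbox.quit()                                  # deletions take effect when the session ends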

Port

Port In the computer world, the term "port" has three different meanings. It may refer to 1) a hardware port, 2) an Internet port number, or 3) the process of porting a software program from one platform to another. 1. Hardware Port A hardware port is a physical connection on a computer or another electronic device. Common ports on modern desktop computers include USB, Thunderbolt, Ethernet, and DisplayPort. Previous generations of computers used different ports, such as serial ports, parallel ports, and VGA ports. Mobile devices often have only one port. For example, an iPhone or iPad may have a single Lightning connector. Android devices often have a USB-C port. The purpose of a hardware port is to provide connectivity and/or electrical power to a device. For example, the USB ports on a computer can be used to connect keyboards, mice, printers, or other peripherals. The USB-C port on a smartphone may be used to charge the device and sync it with a PC. NOTE: A hardware port may also be called an interface, jack, or connector. 2. Internet Port Number All data transmitted over the internet is sent and received using a specific set of commands, also known as a protocol. Each protocol is assigned a specific port number. For example, all website data transferred over HTTP uses port 80. Data sent over HTTPS uses port 443. Other common ports include: Ports 20 and 21 - FTP (file transfer protocol) Port 22 - SSH and SFTP Port 25 - SMTP (outgoing email) Port 465 - SMTP over SSL Port 143 - IMAP (incoming email) Port 993 - IMAP over SSL Port numbers are similar to wireless channels in that they prevent conflicts between different protocols. They also provide a simple way to implement network security measures, since it is possible to allow or block specific protocols. 3. Porting Software "Port" may also be used as a verb. Porting software means taking an application written for one platform and making it work on another one. For example, a Windows program may be ported to macOS. An iOS app may be ported to Android. In order to port a program from one platform to another, it must be written for the corresponding hardware and operating system. Programs built using a universal development environment may be relatively easy to port, while programs that rely heavily on an operating system's API may have to be completely rewritten. Updated September 19, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/port Copy
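To see a port number in action, the Python sketch below opens a TCP connection to port 80, the standard HTTP port, and sends a minimal request. The hostname is a placeholder, and the snippet is only meant to illustrate connecting to a specific port.

import socket

with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(1024).decode(errors="replace"))   # the server's HTTP response headers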

Portable Software

Portable Software Portable software is software that runs from a removable storage device, such as a USB flash drive. It does not need to be installed on a computer to run and does not store data on the host system. Instead, all user settings and related files are stored on the removable drive. Portable software can refer to 1) individual portable applications or 2) portable software platforms. 1. Individual portable applications Applications that run from external media without requiring installation are considered portable apps. For example, LibreOffice and OpenOffice are both available in "portable" versions that can be run from a flash drive. Examples of other portable apps include Blender Portable – a 3D modeling application, Eclipse Portable – a software IDE, and Media Player Classic Portable – a media playback program. All of these programs are available in both standard and portable versions. 2. Portable software platforms Since there are many portable applications available, it is common to include multiple apps on one removable drive, managed by a single portable platform. By consolidating all your portable apps on one device, you can bring your desktop applications – along with all your files and settings – with you wherever you go. The most popular portable platform is PortableApps.com, which supports over 300 portable applications. Other portable platforms include Ceedo, winPenPack, and SmartKey, the successor to the U3 platform developed by SanDisk. The main benefit of portable software is that you can run applications on any compatible machine. However, it also provides a security benefit since all your settings are stored on the removable storage device. This provides a means of "sandboxing," which prevents unwanted files or malware from being written to your device. Updated July 2, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/portable_software Copy

Portal

Portal Web portals come in two flavors — internal and external. Internal portals are designed to provide services within an organization, while external portals serve as a starting point for browsing the web. Internal Portal Many businesses have a company portal, which is a website designed to provide communication and services to employees. For example, it may include a calendar of events, such as meetings, product releases, and other important dates. It may also provide links to employee documents, such as the employee handbook, tax forms, and FAQs for common job-related questions. Company portals often include a way for employees to communicate with each other, either through live messaging or email. Internal portals often exist within an intranet, which is a private network. This means only authorized users can access the portal. In most cases, you must enter valid login information in order to use an internal portal. While some portal services are installed within a local network, others are hosted by remote servers. Remotely-hosted portals may not offer as much customization as locally installed ones, but they also require less maintenance. Both types of portals can be accessed with a standard web browser. External Portal An external portal is a website designed to be a starting point on the web. It consolidates links to different subjects and related articles, simplifying the web browsing process. Yahoo, for example, is a well-known portal that includes categories such as News, Sports, Finance, Shopping, Movies, Politics, Tech, and others. Each of these topics includes links to news stories, both internal and external to Yahoo!, that you can read. In the early days of the Internet, portals were a common way for people to browse the web. However, as search engine technology improved in the late 1990s, people started using search engines more and portals less. As a result, web portals have faded in popularity over the past few decades and only a few remain. NOTE: A portal is different from a search engine, but most portals include some type of search feature. Yahoo! is both a portal and a search engine since it includes a comprehensive web search capability. Google is primarily a search engine and was not originally designed to be a portal. However, since Google now provides services like YouTube, Google Maps, Google Drive, Gmail, News, and others from the Google home page, it is also a portal. Updated September 29, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/portal Copy

POST

POST Stands for "Power On Self Test." POST (or P.O.S.T.) is a series of system checks run by computers and other electronic devices when they are turned on. The results of the test may be displayed on a screen, output through flashing LEDs, or simply recorded internally. On computer systems, the POST operation runs at the beginning of the boot sequence. If all the tests pass, the rest of the startup process continues automatically. Both Macs and Windows PCs run a POST each time the computer is booted up or restarted. The scan checks the hardware and makes sure the processor, RAM, and storage devices are all functioning correctly. If an error is encountered during the POST, the startup process may pause or halt completely and the error may be displayed on the monitor. On PCs, POST errors are often displayed on the BIOS information screen. They may be output as cryptic codes, such as "08" for bad memory, or as a system message, such as "System RAM failed at offset." On Macs, POST errors are often indicated by a simple graphic, such as a broken folder icon that indicates no bootable device was found. In some cases, the computer screen may not even turn on before POST errors take place. If this happens, error codes may be output through flashing LED lights or audible tones. For example, an Apple iMac will sound three successive tones, followed by a five-second pause, then repeat the tones when bad RAM is encountered during startup. Most PCs also emit beeps when POST errors are detected, though each manufacturer uses its own codes. POST is a rather technical term that only computer technicians use on a regular basis. However, it is a good acronym to know, since it will help you better understand error messages that may pop up on computers or other electronic devices. If your computer won't start up because of a POST error, you can use a different device to look up the meaning and cause of the error, possibly from the manufacturer's website. Then you can take the appropriate action, such as removing a memory module or reseating the video card, and try starting up your computer again. NOTE: "POST" is also a method for passing HTML form variables from one webpage to another without displaying them in the address bar. The alternative method is "GET," which appends the values to the URL. Updated November 24, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/post Copy
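
Regarding the note above, here is a minimal Python sketch contrasting the HTTP GET and POST methods. The URL and form fields are placeholders, and the requests are only constructed, not sent.

import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"name": "Ada", "age": "36"})

# GET: the form values are appended to the URL itself
get_request = urllib.request.Request("https://example.com/form?" + params, method="GET")

# POST: the same values travel in the request body, not in the address bar
post_request = urllib.request.Request(
    "https://example.com/form",
    data=params.encode("utf-8"),   # request body bytes
    method="POST",
)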

PostScript

PostScript PostScript is a page description language that describes a page's text, graphics, and layout. PostScript documents can display the same content rendered in pixels on a screen and as high-DPI printed output. PostScript-compatible desktop publishing software and printers helped start the desktop publishing revolution in the 1980s, although later technologies like PDF have largely replaced PostScript for electronic document distribution. Before the introduction of PostScript, typesetting was a manual process that involved assembling text and graphics on a pasteboard, then photographing the layout and transferring it to a typesetting plate for reproduction. Computerized typesetting systems were expensive and complicated to operate, making it difficult for designers to lay out pages. In 1984, Adobe developed PostScript as a device-independent page description language that drew a page's contents using mathematically defined vector shapes. PostScript documents could be designed on a personal computer (like the Apple Macintosh), and would print the same on any PostScript-compatible printer. Designers could easily share documents between computers, desktop publishing applications, and printers. Another aspect of the PostScript format is the use of PostScript fonts. Before PostScript, computers would use bitmap fonts to display text on the screen, while printers would use a separate outline font kept in the printer's memory for printed text. If you used a bitmap font in a document, but your printer did not have the outline version of that font installed, it would result in blocky and pixelated text. PostScript fonts (installed through the Adobe Type Manager utility) used a simplified version of the PostScript language and rendering engine to draw text characters on screen based on the outlines included in the font file. When you printed the document, the printer used those same outlines to render the text at the printer's native resolution for high-quality text. File extensions: .PS Updated October 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/postscript Copy

POTS

POTS Stands for "Plain Old Telephone Service." POTS is the traditional telephone system that has been around for nearly 150 years. It operates over copper wires and was designed for analog signal transmission. For most of the past century, POTS has been the primary telecommunications medium. The infrastructure, such as telephone poles and telephone wires, can be found throughout the United States and around the world. Over the past few decades, digital alternatives, such as VoIP and cellular service, have largely replaced traditional phone plans. Many telephone poles now carry digital fiber optic cables alongside the copper wires. Some include public Wi-Fi repeaters and cellular transmitters. Since telephone poles often carry many types of wires, including power lines, they are now generally referred to as "utility poles." While POTS is gradually becoming obsolete, there are a few advantages to the traditional phone system: High reliability (roughly 99.999%) No external electrical power required (copper wires carry low voltage) Large existing global infrastructure Ability to handle analog and digital signals Clear, uncompressed voice audio Some disadvantages of POTS vs modern telecommunications systems include: No wireless option (besides in-home handsets) Expensive, especially for international calls Limited bandwidth for transmitting digital signals (and therefore low speeds) Requires significant above-ground infrastructure Audio frequency range is limited between 300 and 3,300 Hz POTS vs Landlines The terms "POTS" and "landline" are sometimes used interchangeably. Often, they mean the same thing. However, POTS refers to a phone system that operates over analog telephone wires. A landline may either use POTS or a digital voice service like Vonage or Comcast Digital Voice. Therefore, POTS requires a landline, but a landline may or may not use POTS. Updated August 13, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pots Copy

Power Cycle

Power Cycle To power cycle a device is to turn it off and on again. It is one of the most useful troubleshooting steps you can perform when a device is not working as it should. Power cycling a device is similar to rebooting it, but typically includes a short wait of 5 to 10 seconds where the device is completely powered down before turning it back on. The point of power cycling a malfunctioning device is to clear the contents of its volatile memory, including its RAM. Corrupt or unexpected data in a device's RAM often causes it to crash or fail to perform tasks correctly, so clearing its contents can often resolve unexpected behavior. Power cycling a networked device also resets all network connections to other devices in case of any addressing conflicts. For example, power cycling a modem resets its memory, which clears its caches and temporary data; it also resets its network connection to the ISP, which grants it a new IP address and may help resolve connectivity problems. While power cycling can help resolve problems with most electronic devices, you should be careful when power cycling a computer suddenly without good reason. Since it clears the contents of the system's RAM, any unsaved work will be lost. Many computers perform extra hardware checks after an unexpected shutdown, so it may take longer to reboot. If a computer unexpectedly shuts down while writing a file to a disk, that file may be corrupted and unreadable when the computer restarts. Updated November 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/power_cycle Copy

Power Supply

Power Supply A power supply is a hardware component that supplies power to an electrical device. It receives power from an electrical outlet and converts the current from AC (alternating current) to DC (direct current), which is what the computer requires. It also regulates the voltage to an adequate amount, which allows the computer to run smoothly without overheating. The power supply is an integral part of any computer and must function correctly for the rest of the components to work. You can locate the power supply on a system unit by simply finding the input where the power cord is plugged in. Without opening your computer, this is typically the only part of the power supply you will see. If you were to remove the power supply, it would look like a metal box with a fan inside and some cables attached to it. Of course, you should never have to remove the power supply, so it's best to leave it in the case. While most computers have internal power supplies, many electronic devices use external ones. For example, some monitors and external hard drives have power supplies that reside outside the main unit. These power supplies are connected directly to the cable that plugs into the wall. They often include another cable that connects the device to the power supply. Some power supplies, often called "AC adaptors," are connected directly to the plug (which can make them difficult to plug in where space is limited). Both of these designs allow the main device to be smaller or sleeker by moving the power supply outside the unit. Since the power supply is the first place an electronic device receives electricity, it is also the most vulnerable to power surges and spikes. Therefore, power supplies are designed to handle fluctuations in electrical current and still provide a regulated or consistent power output. Some include fuses that will blow if the surge is too great, protecting the rest of the equipment. After all, it is much cheaper to replace a power supply than an entire computer. Still, it is wise to connect all electronics to a surge protector or UPS to keep them from being damaged by electrical surges. Updated January 28, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/powersupply Copy

Power User

Power User A power user is someone with better-than-average computer knowledge and skills, who demands more performance and efficiency from their computer. The term itself is subjective and can refer to computer professionals (like IT administrators and software developers) or simple enthusiasts. Power users tend to view their computers as a hobby instead of just a tool, finding enjoyment in tinkering and customizing settings to make their workflows easier and more efficient. Most computer users use their computers for basic tasks like web browsing, gaming, creative projects, and office work — tasks that don't require a high-end computer. Power users, on the other hand, are often not satisfied with an average computer and instead aim for faster performance through a combination of higher-end components and software customization. Power users typically install more third-party software than usual, including utility applications like Microsoft Powertoys that add new system features and change how existing features work. Many utilities like Microsoft Powertoys are made for power users to add and customize new system features Not every power user is the same, and how they customize their computers will vary based on their needs and interests. For example, some power users may focus their attention on increasing gaming performance, while others may enjoy setting up a home file server. Power users are often comfortable using a computer's command-line interface to issue commands. They will dig into an operating system's settings to customize how it looks and works, and may even manually edit configuration files. They tend to be more familiar with keyboard shortcuts, automation tools, and other ways to streamline tasks. They are often experienced with multiple operating systems and may configure their computers to dual boot. Since power users often better understand how different computer components work and interact, they tend to be comfortable upgrading their computer's hardware. Many power users assemble their own computers from parts instead of buying pre-assembled machines and may overclock their CPUs to push performance past the official specifications. Some power users (typically gamers) customize the appearance of their computer's chassis as well, adding color-changing LED lights that provide another thing to tinker with. Updated November 29, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/power_user Copy

Powerline Network

Powerline Network A powerline network is a way to use a building's existing electrical wiring to bridge multiple segments of a local area network. Using one is a simple way to create new wired network connections without the hassle of running new Ethernet cables. Powerline networks extend a home network to places Wi-Fi struggles to reach. A brick wall can interfere with a Wi-Fi signal, but creating a powerline network that runs from one side of the wall to the other gets around that interference. Powerline networks are slower than Ethernet and MoCA networks but can provide a stable network extension when those options are unavailable. A pair of TP-Link powerline network adapters Using a powerline network involves using two or more powerline adapters. First, plug one adapter into a standard electrical wall outlet and connect it by Ethernet to a router. Plug another adapter into a wall outlet in an area that needs a network connection, then use the adapter's Ethernet port to connect a device, like a computer, wireless access point, or network switch. The adapters will then bridge the networks using the electrical wiring in the walls. NOTE: Powerline networks have some significant limitations that are important to know. Speeds claimed by adapter manufacturers only apply to ideal conditions, which are unlikely to exist in real-world use. The quality and layout of the electrical wiring have an impact, and speeds will be faster between two adapters on the same circuit than between two adapters on different ones. Appliances that draw significant power, like a dryer, can cause interference while running. Updated October 6, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/powerline_network Copy

PowerPoint

PowerPoint PowerPoint is a presentation program developed by Microsoft. It is included in the standard Office suite along with Microsoft Word and Excel. The software allows users to create anything from basic slide shows to complex presentations. PowerPoint is often used to create business presentations, but can also be used for educational or informal purposes. The presentations are comprised of slides, which may contain text, images, and other media, such as audio clips and movies. Sound effects and animated transitions can also be included to add extra appeal to the presentation. However, overusing sound effects and transitions will probably do more to annoy your audience than draw their attention. (Yes, we have all heard the car screeching noise enough times for one lifetime.) Most PowerPoint presentations are created from a template, which includes a background color or image, a standard font, and a choice of several slide layouts. Changes to the template can be saved to a "master slide," which stores the main slide theme used in the presentation. When changes are made to the master slide, such as choosing a new background image, the changes are propagated to all the other slides. This keeps a uniform look among all the slides in the presentation. When presenting a PowerPoint presentation, the presenter may choose to have the slides change at preset intervals or may decide to control the flow manually. This can be done using the mouse, keyboard, or a remote control. The flow of the presentation can be further customized by having slides load completely or one bullet at a time. For example, if the presenter has several bullet points on a page, he might have individual points appear when he clicks the mouse. This allows more interactivity with the audience and brings greater focus to each point. PowerPoint presentations can be created and viewed using Microsoft PowerPoint. They can also be imported and exported with Apple Keynote, Apple's presentation program for the Macintosh platform. Since most people prefer not to watch presentations on a laptop, PowerPoint presentations are often displayed using a projector. Therefore, if you are preparing a PowerPoint presentation for a room full of people, just make sure you have the correct video adapter. Updated May 5, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/powerpoint Copy

PowerShell

PowerShell PowerShell is a software automation technology developed by Microsoft. It includes a command-line shell and a corresponding scripting language. Microsoft introduced PowerShell for Windows in 2006 to address the shortcomings of DOS. In 2016, Microsoft made the technology open-source and multiplatform, providing versions for Windows, Linux, and macOS. Microsoft PowerShell is similar to the standard DOS command prompt, but is built on the .NET framework. Unlike the DOS COMMAND.COM (or "CMD") shell, PowerShell accepts and returns .NET objects. It supports standard system commands and "cmdlets," which are specific to the PowerShell scripting language. Each cmdlet is a verb-noun pair, such as the examples below: Get-Command – Display a list of all available commands Get-Help – Display a help page describing how to use a specific command Get-Location – Show the current directory Rename-Item – Change the filename of a file Restart-Computer – Restart the computer While PowerShell's syntax is different than the standard DOS shell, it is backward-compatible with many CMD commands. For example, dir (and ls) are aliases for Get-ChildItem, which lists the contents of the current directory. PowerShell runs in a command-line interface, similar to DOS, but is designed for automation. For example, the Windows command prompt can only run one command at a time, while PowerShell supports scripts that contain multiple commands. PowerShell is also extensible, allowing it access to several different APIs. Developers can use PowerShell to automate other Microsoft technologies, such as Azure and SQL Server, as well as third-party software, including AWS, Google Cloud, and VMware. Updated September 17, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/powershell Copy

PPC

PPC Stands for "Pay Per Click." PPC is a term used in online marketing, referring to an advertising strategy that links payments for online advertising to the number of times people click an ad. PPC ads commonly appear on search engine results pages and social media feeds. An advertiser's goal when running a PPC campaign is to bring interested users to a website to sell a product or service. Search engines and social media sites usually run PPC advertisements through an auction model. When the site shows an ad, it runs an instant auction to decide whose ad to show. First, an advertiser sets a maximum cost-per-click (CPC) that they're willing to pay. They also choose the search keywords or social media demographics they want to target. The ad platform even looks at the quality of the ad itself—how relevant it is to the selected keywords, whether the landing page provides a good user experience, and the ad's previous click-through rate (CTR). This combination of factors means that the ad with the highest maximum CPC does not necessarily win since a poor-quality ad with the highest CPC can lose to a higher-quality ad with a slightly lower CPC. Ad networks also sell banner ads on other websites using a PPC model, but at a flat rate instead of a bid. The ad platform and the advertiser agree on a CPC before running the ads, and pages with higher traffic or a more valuable audience command a higher CPC. NOTE: PPC advertising is most used when an advertiser wants to drive traffic to their website. If the goal is to drive brand awareness, advertising campaigns can use a cost-per-mille (CPM) model that pays per thousand impressions (the number of times an ad is displayed instead of clicked). Other ad campaigns may run on a cost-per-lead (CPL) basis, paying when an interested potential customer signs up to learn about an offer. Updated October 14, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ppc Copy
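
As a rough illustration of the auction described above, the short Python sketch below ranks two ads by the product of their maximum CPC and a quality score. The scoring rule and all numbers are invented for illustration and do not represent any ad platform's actual formula.

# Simplified PPC auction sketch: rank ads by maximum CPC bid times a quality score.
ads = [
    {"advertiser": "A", "max_cpc": 2.00, "quality": 0.4},   # high bid, poor-quality ad
    {"advertiser": "B", "max_cpc": 1.50, "quality": 0.9},   # lower bid, better ad
]

def ad_rank(ad):
    return ad["max_cpc"] * ad["quality"]

winner = max(ads, key=ad_rank)
print(winner["advertiser"])   # "B" wins despite the lower maximum CPC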

PPGA

PPGA Stands for "Plastic Pin Grid Array." PPGA is a processor package that uses a plastic backing to support the pin grid array, a series of metal pins on the underside of the chip that fit into a socket on the motherboard. It differs from other types of pin grid array packaging by its use of plastic instead of ceramic or other materials. The plastic is low-cost and dissipates heat quickly, allowing processors to use more transistors and run at higher speeds. Intel used a PPGA packaging on several of its early Celeron processors. These CPU chips were produced between 1998 and 2000, and were popular for their power relative to Intel's more-expensive Pentium II series. They were also favored by overclockers, who found the chips to be stable when operated at faster speeds than Intel officially supported. The PPGA packaging was introduced when Intel began moving away from the slot-based chips of the early Pentium II and Celeron series. PPGA itself fell out of use after Intel introduced the FCPGA (flip-chip pin grid array) packaging with the Pentium III series in 2000. Over time, pin grid array packaging was replaced entirely by LGA (land grid array) packaging, where the pins are instead located in the socket itself and connect to flat contacts on the underside of the chip. Updated September 28, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ppga Copy

PPI

PPI Stands for "Pixels Per Inch." The resolution of a printed photo is often measured in DPI, or "dots per inch." The DPI describes how many dots of ink the printer prints per line per inch. Therefore, the higher the DPI, the greater the detail of the printed image. However, even if a photo is printed with a high DPI, the detail represented in the photo can only be as high as the PPI. PPI measures the number of pixels per line per inch in a digital photo. This number is directly related to the number of megapixels a digital camera can capture. For example, the original Canon Digital Rebel is a 6.3 megapixel camera and captures 2048 vertical by 3072 horizontal pixels. Therefore, when printing a 4x6 image, the PPI would be 3072 px. / 6 in. = 512 PPI. That is high enough to print a very detailed 4x6 photo. However, if you were to print a large 20x30 poster image from a 6.3 megapixel image, the PPI would be 3072 px. / 30 in. = 102.4 PPI. Most modern printers print images with a minimum resolution of 300 DPI. Therefore, if you print a photo with a PPI of less than 300, you may notice the image is not as sharp as you would like. Of course, the detail in a 20x30 image may not need to be as clear as a 4x6 photo. But a good rule of thumb is to keep your PPI above 300 so your prints will look nice and clear. Updated June 19, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ppi Copy
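
The arithmetic in the example above can be expressed as a small Python sketch; the pixel and print-size figures are taken directly from the entry.

# Pixels per inch for a printed photo
def ppi(pixels, inches):
    return pixels / inches

print(ppi(3072, 6))    # 512.0  -> sharp 4x6 print
print(ppi(3072, 30))   # 102.4  -> soft 20x30 poster, below the ~300 PPI guideline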

PPL

PPL Stands for "Pay Per Lead." PPL is similar to CPL, but measures the cost per lead from the advertiser's perspective. For example, if an advertiser pays $500 for 1,000 leads, the advertiser's average PPL is $0.50 ($500 ÷ 1,000). Leads can be anything from basic page views to product purchases or new service signups. Leads that generate more revenue generally have a higher PPL. Advertisers often monitor PPL to measure the effectiveness of certain ads. By comparing the average revenue per lead to the PPL cost, the advertiser can determine if the ads are increasing or decreasing profit. For example, if the average return on a lead is $0.80 and the PPL is $0.50, there is an average profit of $0.30 per lead. However, if the average return is less than $0.50, the ads should be modified or stopped since the leads cost more than the revenue they are generating. "PPL" is also used in online chat as an abbreviation for "people." Updated October 16, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ppl Copy
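
A minimal Python sketch of the PPL and profit math from the example above:

ad_spend = 500.00
leads = 1000
ppl = ad_spend / leads                       # $0.50 per lead

revenue_per_lead = 0.80
profit_per_lead = round(revenue_per_lead - ppl, 2)
print(ppl, profit_per_lead)                  # 0.5 0.3 -> about $0.30 profit per lead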

PPM

PPM Stands for "Pages Per Minute." PPM is used to measure the printing speed of both inkjet and laser printers. Most printers include a PPM rating for both black and color documents. These speed measurements are typically listed in the printer's technical specifications. While a higher pages per minute rating does indicate a faster printing speed, this measurement can be misleading. This is because manufacturers measure the maximum PPM in the fastest printing mode, a.k.a. "economy mode," which is also the lowest quality. When printing in regular mode, the speed may be only half as fast. When printing in fine or high-quality mode, the speed will likely be reduced even further. Furthermore, a printer's maximum PPM speed is measured using basic text pages, with no graphics, lines, or other objects. Therefore, if you have a text document that includes a picture, the page could take several times longer to print than a plain text document. If you are going to be printing a lot of color photos, make sure to check the printer's photo printing speed, which is often significantly slower than the printer's maximum PPM. Finally, the PPM measurement does not take into account how long it takes the printer to warm up and begin printing. Therefore, if you are only printing one or two pages, the warm-up time may be longer than the actual time it takes to print the document. In summary, PPM gives a general idea of how fast a printer is. But since there are several other variables involved that determine a printer's speed, PPM does not always accurately reflect a printer's speed. Therefore, when choosing a printer, it may be helpful to read some reviews about the printer you are interested in. The reviews may give you a better idea of the printer's real-world speed and quality than the numbers on the box do. NOTE: PPM can also be used to measure a scanner's scanning speed. This measurement is particularly important for scanners that use an automatic document feeder (ADF), which allows multiple documents to be scanned consecutively. Updated December 28, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ppm Copy
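
To illustrate why warm-up time matters for short print jobs, here is a rough Python estimate; the 20 PPM and 30-second warm-up figures are made-up illustrative values, not the specifications of any real printer.

# Rough print-time estimate: warm-up dominates short jobs, PPM dominates long ones.
def print_time_seconds(pages, ppm, warmup_seconds):
    return warmup_seconds + pages * 60 / ppm

print(print_time_seconds(1, 20, 30))     # 33.0 s  -> mostly warm-up
print(print_time_seconds(100, 20, 30))   # 330.0 s -> mostly printing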

PPP

PPP Stands for "Point-to-Point Protocol." PPP is a protocol that enables communication and data transfer between two points or "nodes." For many years, PPP was the standard way to establish a dial-up connection to an ISP. As dial-up modems were superseded by broadband devices, PPP connections became increasingly rare. However, PPP lives on in "PPP over Ethernet" (PPPoE), which is a common way to connect to the Internet using a DSL modem. PPP is a data link protocol, which is the second layer of the seven-layer OSI model. It sits just above the physical layer and encapsulates data from the five layers above it. This means PPP can be used by multiple applications and may carry data from multiple protocols, such as TCP and UDP. It commonly uses the Internet protocol (IP) to transfer data over the Internet. PPTP Because PPP encapsulates other protocols, it can be used for data tunneling, or securely transferring data within the PPP protocol. The Point-to-Point Tunneling Protocol (PPTP) was designed for this purpose and is often used to create virtual private networks (VPNs). However, PPP was not originally designed as a secure protocol and has some known security vulnerabilities. Therefore, modern VPNs often use other protocols. PPPoE PPPoE, or PPP over Ethernet, is a standard way to connect to an ISP using a DSL modem. It allows you to connect your modem to a computer or router using a high-speed Ethernet port. The modem then establishes a point-to-point connection with the ISP. PPP supports authentication, so you may be asked to enter a username and password in the PPPoE settings. This information provides a simple way for your DSL Internet provider to confirm you are a valid subscriber. Updated September 25, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ppp Copy

PPPoE

PPPoE Stands for "Point-to-Point Protocol over Ethernet." PPPoE is a network configuration used for establishing a PPP connection over Ethernet. It is commonly used for creating DSL Internet connections. Since DSL modems typically connect to computers via an Ethernet connection, a standard dial-up PPP connection cannot be used. Therefore, PPP over Ethernet allows computers to connect to an Internet service provider (ISP) via a DSL modem. PPPoE is a network configuration option found in the Network control panel (Windows) or the Network system preference (Mac OS X). In order to create a PPPoE connection, you will need to enter the service name provided by the ISP as well as a username and password. This provides a simple way for the ISP to uniquely identify your system and establish your Internet connection. PPPoE can be contrasted to DHCP, which dynamically assigns unique IP addresses to connected systems and is typically used by cable Internet service providers. The biggest advantage of a PPPoE configuration is that it is easy to set up. It also supports multiple computers on a local area network (LAN). The downside of PPPoE is that it requires additional overhead, or extra data, to be sent over the Internet connection. A standard PPPoE connection adds 8 bytes of data to each packet transmitted. While this is only a small fraction of a full-size packet with an MTU of 1500 bytes, some connections use packets as small as 60 bytes, for which PPPoE adds over 13% overhead. For this reason, many DSL providers now offer DHCP configurations as well. Updated August 17, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pppoe Copy
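
A quick Python check of the overhead percentages mentioned above, assuming 8 bytes of PPPoE overhead per packet:

def pppoe_overhead_percent(packet_bytes, overhead_bytes=8):
    return 100 * overhead_bytes / packet_bytes

print(round(pppoe_overhead_percent(1500), 2))   # 0.53  -> negligible for full-size packets
print(round(pppoe_overhead_percent(60), 2))     # 13.33 -> over 13% for small packets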

PPS

PPS Stands for "Pay Per Sale." PPS is a type of online advertising where a web publisher is paid a commission for each sale generated by his website. It is a more specific version of the CPA model and is commonly used in affiliate marketing. Most online ads use the PPC model, in which the website owner receives a small amount for each click on an advertisement banner or link. While these types of ads can generate lots of traffic for websites, they do not guarantee sales or conversions. By running PPS ads, merchants only pay for clicks that lead to sales. Since merchants only pay commissions on sales made from PPS ads, the commissions have to be high enough to make it worthwhile for web publishers to run them instead of PPC ads. Therefore, commissions over 50% are not uncommon for high-margin items, such as software and online services. Physical products with lower margins often have commissions under 10%. However, if they have high average order values (AOVs), sales with low commissions can still produce high payments for publishers. NOTE: To maximize revenue, many websites include both PPC and PPS ads. Updated May 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pps Copy

PPTP

PPTP Stands for "Point-to-Point Tunneling Protocol." PPTP is a networking standard for connecting to virtual private networks, or VPNs. VPNs are secure networks that can be accessed over the Internet, allowing users to access a network from a remote location. This is useful for people who need to connect to an office network from home or access their home computer from another location. The "point-to-point" part of the term refers to the connection created by PPTP. It allows one point (the user's computer) to access another specific point (a remote network) over the Internet. The "tunneling" part of the term refers to the way one protocol is encapsulated within another protocol. In PPTP, the point-to-point protocol (PPP) is wrapped inside the TCP/IP protocol, which provides the Internet connection. Therefore, even though the connection is created over the Internet, the PPTP connection mimics a direct link between the two locations, allowing for a secure connection. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pptp Copy

PRAM

PRAM Stands for "Parameter Random Access Memory." PRAM is a type of memory found in some Mac computers that stores system settings. A Mac's PRAM stores several system parameters, like some display settings, the current time zone, and speaker volume. It is connected to a small backup battery to maintain its settings even when the computer is turned off and unplugged. Resetting the PRAM is a common troubleshooting step when a Mac is not working as expected. To reset the PRAM, first shut the computer down completely. Turn it back on by pressing its Power button, then immediately press and hold the Command, Option, P, and R keys on the keyboard. After about 20 seconds, it may appear that the Mac is restarting again; at this point, you can release the keys and allow the Mac to boot. Once it starts, you may need to reset some system settings, like the time zone and speaker volume. Press Command + Option + P + R during boot to reset a Mac's PRAM Modern Macs no longer use PRAM to store settings. Instead, they use a form of nonvolatile RAM (NVRAM) that performs the same function but without a backup battery. The process for resetting an Intel Mac's NVRAM is the same as it is for resetting an older Mac's PRAM. However, Apple Silicon-based Macs use their NVRAM differently, in a way that does not allow the user to reset it for troubleshooting. Updated April 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/pram Copy

Prebinding

Prebinding Prebinding is an optimization process that allows faster launching of applications in Mac OS X. Often, when a program is opened, it loads data from files called dynamic libraries. These libraries must be located each time a program is run since their memory addresses are usually undefined. When a program incorporates prebinding, the addresses of the library or libraries referenced by the program are predefined. This saves time by avoiding unnecessary searching each time the program is run. The prebinding process happens during the "Optimizing" stage of the program's installation. While prebinding may take some time, it is more efficient to do this process once, rather than each time the program is run. Prebinding is only possible with Mach-O executable programs, since CFM PEF binaries do not support prebinding. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/prebinding Copy

Prefetch

Prefetch Prefetching is an optimization technique that fetches data and loads it into memory before that data is actually requested. When successful, prefetching improves performance and reduces latency because an application does not need to read data from a slower storage disk but can instead access it from a faster memory cache. However, it requires that the system accurately predicts what data an application or process needs in advance. Prefetching can lead to wasted resources if the system makes incorrect predictions. In order to predict what data it needs to prefetch, a computer system analyzes usage patterns to see how users and applications typically request data. They may use simple pattern matching algorithms or advanced machine learning to predict future behavior. Most operating systems use this information to speed up application launch times by preloading data these apps need; this feature is called SuperFetch in Windows and Application Launch Services in macOS. Mobile operating systems like Android and iOS also use prefetching to speed up app launch times. Windows tracks what data applications need when they launch using Prefetch data files Software applications and hardware devices can both include built-in prefetching functions. CPUs often use prefetching to load the next set of execution instructions into the cache. GPUs prefetch texture images into VRAM if the system anticipates they are needed to render upcoming frames. Database systems often prefetch data that a query may request before that query is executed. Web browsers can even prefetch data from other webpages that are linked to from the current page, downloading scripts, images, audio, and video to help the next page load faster when you click a link. Updated November 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/prefetch Copy
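
The sketch below shows the general idea of sequential read-ahead in Python: when one block is read, the next block is fetched into a cache before it is requested. It is a simplified illustration, not how any particular operating system's prefetcher is implemented.

# Minimal sequential read-ahead sketch.
def read_block_from_disk(n):
    return f"data-for-block-{n}"        # stand-in for a slow disk read

cache = {}

def read_block(n):
    if n not in cache:                  # cache miss: pay for the slow read
        cache[n] = read_block_from_disk(n)
    data = cache[n]
    next_block = n + 1                  # predict a sequential access pattern
    if next_block not in cache:
        cache[next_block] = read_block_from_disk(next_block)   # prefetch ahead of time
    return data

read_block(0)      # slow read of block 0; block 1 is prefetched
read_block(1)      # served from the cache with no disk access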

Pretest

Pretest Before releasing products for sale, companies often pretest their products for safety and reliability. This process typically involves detecting and fixing errors and making improvements if necessary. In the computer industry, pretesting can be performed on both hardware and software. Computer hardware manufacturers often pretest their products rigorously before distributing them to the public. For example, a hard drive may be tested in extremely hot and cold temperatures to determine what the safe operating temperature of the hard drive is. A chip manufacturer may test a CPU under a maximum processing load for an extended period of time to make sure it does not overheat. Hard drives, CPUs, and RAM are all tested for reliability, since small errors can cause major problems. Modern computers can perform billions of calculations per second, so even one miscalculation per billion is considered highly unreliable. Software is also pretested before it is made available for sale. The amount of pretesting is generally determined by the size and complexity of the program. Initially, pretesting is done by the software development team, as the initial bugs are worked out. Then the software may go through an "alpha" phase, where the developers and possibly other users test the software. As the program nears completion, it may go through a "beta" phase, where additional users can test the software and provide feedback to the developers. This software is called beta software and may be distributed to a select group of users or the general public. Once the beta stage is complete, the software is ready for sale. Updated October 28, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pretest Copy

Primary Key

Primary Key A primary key is a unique identifier value for a database record in a table. It is typically a number, often generated sequentially as the database creates new records, but text strings may also be used as primary keys if they are guaranteed to be unique. Relational databases use primary keys to create links between records across separate tables. When you create a table, you have several options for assigning primary keys. First, you may choose a primary key from existing information in the record, like a customer phone number or Social Security Number; this kind of primary key is called a "natural key." A "surrogate key," on the other hand, is a value generated solely to serve as the primary key. A DBMS can generate a surrogate key sequentially as it adds new records, or generate a random number or UUID instead. If a DBMS must delete a record, it usually preserves that record's primary key and will not reuse it. A Customers table in a database using the customer_id field as a primary key Relational databases use primary keys to ensure that each record in a table is unique and to create links between records across tables. For example, a customer database could include a Customer Identification table with names and email addresses, and another Shipping Address table with street addresses. The Customer Identification table might use a generated UUID in the Customer ID field as a primary key, allowing customers to update their name without changing how the DBMS references that record. The Shipping Address table, meanwhile, saves every shipping address in the system as a separate record with a sequential ID number that increments with each new address. This allows each customer record in the Customer Identification table to be associated with one or more addresses, using the primary key from the address table as a foreign key within the customer record. Updated December 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/primarykey Copy
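
Here is a minimal sketch of the idea using Python's built-in sqlite3 module. The table and column names loosely follow the example above, and it uses simple sequential surrogate keys rather than UUIDs.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,   -- surrogate key generated by the database
        name        TEXT,
        email       TEXT
    );
    CREATE TABLE shipping_addresses (
        address_id  INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),  -- foreign key
        street      TEXT
    );
""")

db.execute("INSERT INTO customers (name, email) VALUES (?, ?)", ("Ada", "ada@example.com"))
customer_id = db.execute("SELECT customer_id FROM customers WHERE name = 'Ada'").fetchone()[0]
db.execute("INSERT INTO shipping_addresses (customer_id, street) VALUES (?, ?)",
           (customer_id, "123 Main St"))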

Primary Memory

Primary Memory Primary memory is computer memory that is accessed directly by the CPU. This includes several types of memory, such as the processor cache and system ROM. However, in most cases, primary memory refers to system RAM. RAM, or random access memory, consists of one or more memory modules that temporarily store data while a computer is running. RAM is volatile memory, meaning it is erased when the power is turned off. Therefore, each time you start up your computer, the operating system must be loaded from secondary memory (such as a hard drive) into the primary memory, or RAM. Similarly, whenever you launch an application on your computer, it is loaded into RAM. The operating system and applications are loaded into primary memory, since RAM can be accessed much faster than storage devices. In fact, the data can be transferred between the CPU and RAM more than a hundred times faster than between the CPU and the hard drive. By loading data into RAM, programs can run significantly faster and are much more responsive than if they constantly accessed data from secondary memory. NOTE: Primary memory may be called "primary storage" as well. However, this term is somewhat more ambiguous since, depending on the context, primary storage may also refer to internal storage devices, such as internal hard drives. Updated December 8, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/primary_memory Copy

Primitive

Primitive In computer science, a primitive is a fundamental data type that cannot be broken down into a simpler data type. For example, an integer is a primitive data type, while an array, which can store multiple data types, is not. Some programming languages support more data types than others and not all languages implement data types the same way. However, most high-level languages share several common primitives. Java, for instance, has eight primitive data types: boolean – a single TRUE or FALSE value (typically only requires one bit) byte – 8-bit signed integer (-128 to 127) short – 16-bit signed integer (-32,768 to 32,767) int – 32-bit signed integer (-2^31 to 2^31 - 1) long – 64-bit signed integer (-2^63 to 2^63 - 1) float – 32-bit floating point number double – 64-bit floating point number char – 16-bit Unicode character The string data type is generally considered non-primitive since it is stored as an array of characters. Primitives supported by each programming language are sometimes called "built-in data types" since they store values directly in memory. Non-primitive data types store references to values rather than the values themselves. Examples of non-primitive Java data types include arrays and classes. Updated May 23, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/primitive Copy
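
The signed integer ranges listed above follow directly from two's-complement arithmetic, as this short Python sketch shows:

# Signed two's-complement range for each fixed-width integer type
for name, bits in [("byte", 8), ("short", 16), ("int", 32), ("long", 64)]:
    low = -2 ** (bits - 1)
    high = 2 ** (bits - 1) - 1
    print(f"{name}: {low} to {high}")
# byte: -128 to 127
# short: -32768 to 32767
# int: -2147483648 to 2147483647
# long: -9223372036854775808 to 9223372036854775807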

Print Server

Print Server A print server is a device that allows you to share a printer with multiple computers. It may be a standalone adapter or may be integrated within a printer or a router. When activated, the print server allows a printer to connect to a local network rather than a single computer. The printer can then be accessed by multiple devices (including both Mac and Windows computers) as a "network printer." Standalone print servers come in several varieties. Most have a USB port, which connects directly to the USB port of the printer. However, some print servers can connect to a printer's Ethernet or parallel port as well. Wired print servers include an Ethernet port for connecting directly to a router, while wireless versions are able to connect to a Wi-Fi network. Printers that include a built-in print server are often called "network printers" or "wireless printers." These printers may have an Ethernet port for connecting directly to a LAN or a built-in Wi-Fi card, which enables the printer to show up on a wireless network. Since many homes and businesses now have wireless networks, wireless printers have become a popular way to share a printer with multiple computers. Some routers can also function as print servers. Besides the typical Ethernet ports, they also include a USB port for connecting a printer. When you connect a printer to the router, the printer becomes a network device and can be accessed by other devices on the network. Updated December 14, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/print_server Copy

Printer

Printer A printer is an output device that prints paper documents. This includes text documents, images, or a combination of both. The two most common types of printers are inkjet and laser printers. Inkjet printers are commonly used by consumers, while laser printers are a typical choice for businesses. Dot matrix printers, which have become increasingly rare, are still used for basic text printing. The printed output produced by a printer is often called a hard copy, which is the physical version of an electronic document. While some printers can only print black and white hard copies, most printers today can produce color prints. In fact, many home printers can now produce high-quality photo prints that rival professionally developed photos. This is because modern printers have a high DPI (dots per inch) setting, which allows documents to be printed with a very fine resolution. In order to print a document, the electronic data must be sent from the computer to the printer. Many software programs, such as word processors and image editing programs, include a "Print..." option in the File menu. When you select "Print," you will typically be presented with a Print dialog box. This box allows you to select the print output settings before sending the document to the printer. After choosing the appropriate settings, you can hit the Print button, which will send the document to the printer. Of course, for the document to print, the printer must be turned on and connected to the computer. Most modern printers are connected using a standard USB cable. However, some printers can be wirelessly connected to one or more computers over a Wi-Fi network. You can also use more than one printer on a single computer, as long as the correct drivers are installed. While printers are notorious for breaking down at inopportune times, modern printers are fortunately more reliable than the printers of the past. Of course, keeping extra ink or toner cartridges on hand is still your responsibility. Updated January 12, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/printer Copy

Private IP Address

Private IP Address A private IP address identifies a device on a local network (LAN). It is similar to a public IPv4 address, which identifies a device connected to the Internet, but is limited to a predefined range of IPs. Below are three common ranges of IP addresses reserved for private IPs: 10.0.0.0 – 10.255.255.255 172.16.0.0 – 172.31.255.255 192.168.0.0 – 192.168.255.255 10.0.0.1, for instance, cannot be used as a public IP address because it falls in the range of private IPs. Instead, 10.0.0.1 can only be assigned to a router, computer, or another device on a local network. Routers incrementally assign IP addresses to devices when they connect to the network, via DHCP. For example, a router may use the address 10.0.0.1 and assign 10.0.0.2 to the first device that connects to it. The following device will receive an address of 10.0.0.3, then 10.0.0.4, etc. 192.168.x.x vs 10.x.x.x Most local networks use the 192.168.x.x range of private IPs. The second most popular range is 10.x.x.x, followed by 172.16.x.x. Generally, the router manufacturer determines the default range. For example, Linksys and Netgear routers typically use 192.168.0.1 or 192.168.1.1 for the router IP address, and therefore the IP ranges of 192.168.x.x. Apple routers (now discontinued) used the 10.0.0.x range. The 172.16.x.x range is the least common but may be used by large networks, such as university campuses and office buildings. NOTE: Network address translation (NAT) translates the private IP address of each device on a network to the public IP address assigned by the ISP. Thanks to NAT, all devices connected to a router share the same public IP address. Updated May 18, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/private_ip_address Copy
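
A quick way to check whether an address falls in a reserved private range is Python's standard ipaddress module; the sample addresses below are arbitrary.

import ipaddress

for addr in ["10.0.0.1", "172.16.0.1", "192.168.1.10", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.0.0.1 True
# 172.16.0.1 True
# 192.168.1.10 True
# 8.8.8.8 False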

Process

Process A process is an instance of a program currently running on a computer. It represents the active execution of a program by the CPU and can range from a small background task to a comprehensive application, like a word processor or web browser. Each process consists of one or more threads and contains the instructions found in the program's code and any resources it needs in its current state (like input, output, and intermediate data). Many applications split their tasks into child processes. For example, a web browser may start a separate process for each tab you open and each extension or plugin you install. Independent processes from different applications are generally not allowed to communicate directly in order to prevent conflicts and ensure data security. However, interprocess communication (IPC) techniques like sockets and message queues provide ways for processes to exchange information in a safe and controlled manner. Application processes and background processes in the Windows Task Manager Modern operating systems support multitasking, which allows multiple processes to run concurrently. Each CPU core can run one process at a time, allowing several to run in parallel. Since there are always more active processes than available cores, the operating system's task scheduler divides CPU time between processes based on priority, rapidly switching between them to keep them running almost simultaneously. Interactive processes tend to have the highest priority, so user input like keystrokes and mouse clicks are given resources immediately while background processes wait their turn. Each operating system also provides utilities that let the user see what processes are running on their computer and manually terminate one that is no longer responding. Windows users can invoke the Task Manager using a keyboard shortcut (Control + Alt + Delete). The macOS equivalent is the Activity Monitor app (found in the Applications / Utilities folder). Unix and Linux include a variety of system monitor utilities depending on the distribution, like the top and htop utilities or the GNOME System Monitor application. Updated December 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/process Copy
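
As a simple illustration of a parent program creating a child process, here is a short sketch using Python's standard multiprocessing module:

import multiprocessing
import os

def child_task():
    print("child running in process", os.getpid())

if __name__ == "__main__":
    print("parent process", os.getpid())
    child = multiprocessing.Process(target=child_task)
    child.start()     # the OS creates a new process with its own resources
    child.join()      # the parent waits for the child process to finish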

Processing

Processing Processing is a programming language designed for the visual arts community. It is open source and uses basic syntax for creating drawings, animations, and interactive programs. It also includes a basic IDE, which serves as the programming interface. Each program created in Processing is called a "sketch" and can be saved in a sketchbook. They are uncompiled source code files and are saved in a plain text format. Each sketch can be run within the Processing interface using the Sketch → Run command. The Sketch → Tweak command allows you to edit the program code in real-time as the sketch is running. The Sketch → Present runs the sketch as a full-screen application. Below is an example of Processing code that creates a window that is 640x480 in size and draws a rectangle near the upper left corner that is 300x200 in size. void setup() {   size(640,480); } void draw() {   rect(40,40,300,200); } Processing is built on Java, so Processing source code has similar syntax to Java. The Processing window is actually a Java program called PApplet, which is a Java class. Before Processing 3 was released (in 2015), it was possible to embed a PApplet into a Java application. The applet dependency was removed in Processing 3.0, so extra code is now required to embed PApplets into Java applications. While the Processing IDE is the standard way to create and run Processing sketches, they can also be built and run using other interfaces. For example, it is possible to use an IDE like Eclipse as long as Processing's core.jar file is imported. It is also possible to run Processing sketches directly in a web browser using the p5.js JavaScript library. Finally, there is a Processing.py library that allows Processing programs to be written and run in Python. Updated March 9, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/processing Copy

Processor

Processor A processor, or "microprocessor," is a small chip that resides in computers and other electronic devices. Its basic job is to receive input and provide the appropriate output. While this may seem like a simple task, modern processors can handle trillions of calculations per second. The central processor of a computer is also known as the CPU, or "central processing unit." This processor handles all the basic system instructions, such as processing mouse and keyboard input and running applications. Most desktop computers contain a CPU developed by either Intel or AMD, both of which use the x86 processor architecture. Mobile devices, such as laptops and tablets may use Intel and AMD CPUs, but can also use specific mobile processors developed by companies like ARM or Apple. Modern CPUs often include multiple processing cores, which work together to process instructions. While these "cores" are contained in one physical unit, they are actually individual processors. In fact, if you view your computer's performance with a system monitoring utility like Windows Task Manager (Windows) or Activity Monitor (Mac OS X), you will see separate graphs for each processor. Processors that include two cores are called dual-core processors, while those with four cores are called quad-core processors. Some high-end workstations contain multiple CPUs with multiple cores, allowing a single machine to have eight, twelve, or even more processing cores. Besides the central processing unit, most desktop and laptop computers also include a GPU. This processor is specifically designed for rendering graphics that are output on a monitor. Desktop computers often have a video card that contains the GPU, while mobile devices usually contain a graphics chip that is integrated into the motherboard. By using separate processors for system and graphics processing, computers are able to handle graphic-intensive applications more efficiently. Updated April 9, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/processor Copy

Processor Core

Processor Core A processor core (or simply “core”) is an individual processor within a CPU. Many computers today have multi-core processors, meaning the CPU contains more than one core. For many years, computer CPUs only had a single core. In the early 2000s, as processor clock speeds began plateauing, CPU manufacturers needed to find other ways to increase processing performance. Initially, they achieved this by putting multiple processors in high-end computers. While this was effective, it added significant cost to the computers and the multiprocessing performance was limited by the bus speed between the CPUs. By combining processors on a single chip, CPU manufacturers were able to increase performance more efficiently at a lower cost. The individual processing units became known as “cores” rather than processors. In the mid-2000s, dual-core and quad-core CPUs began replacing multi-processor configurations. While initially only high-end computers contained multiple cores, today nearly all PCs have multi-core processors. NOTE: “Core” is also the name of Intel’s processor line, which replaced the Pentium lineup in 2006. Examples of Intel Core processors include the Core Duo, Core 2, Core i3, Core i5, and Core i7. Updated November 19, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/processor_core Copy

Program

Program Program is a common computer term that can be used as both a noun and a verb. A program (noun) is executable software that runs on a computer. It is similar to a script, but is often much larger in size and does not require a scripting engine to run. Instead, a program consists of compiled code that can run directly from the computer's operating system. Examples of programs include Web browsers, word processors, e-mail clients, video games, and system utilities. These programs are often called applications, which can be used synonymously with "software programs." On Windows, programs typically have an .EXE file extension, while Macintosh programs have an .APP extension. When "program" is used as verb, it means to create a software program. For example, programmers create programs by writing code that instructs the computer what to do. The functions and commands written by the programmer are collectively referred to as source code. When the code is finished, the source code file or files are compiled into an executable program. Updated January 31, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/program Copy

Programming Language

Programming Language A programming language is a set of commands, instructions, and other syntax used to create a software program. Languages that programmers use to write code are called "high-level languages." This code can be compiled into a "low-level language," which is recognized directly by the computer hardware. High-level languages are designed to be easy to read and understand. This allows programmers to write source code in a natural fashion, using logical words and symbols. For example, reserved words like function, while, if, and else are used in most major programming languages. Symbols like +, ==, and != are common operators. Many high-level languages are similar enough that programmers can easily understand source code written in multiple languages. Examples of high-level languages include C++, Java, Perl, and PHP. Languages like C++ and Java are called "compiled languages" since the source code must first be compiled in order to run. Languages like Perl and PHP are called "interpreted languages" since the source code can be run through an interpreter without being compiled. Generally, compiled languages are used to create software applications, while interpreted languages are used for running scripts, such as those used to generate content for dynamic websites. Low-level languages include assembly and machine languages. An assembly language contains a list of basic instructions and is much more difficult to read than a high-level language. In rare cases, a programmer may decide to code a basic program in an assembly language to ensure it operates as efficiently as possible. An assembler can be used to translate the assembly code into machine code. The machine code, or machine language, contains a series of binary codes that are understood directly by a computer's CPU. Needless to say, machine language is not designed to be human readable. Updated September 23, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/programming_language Copy
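As an illustrative sketch, the short Python program below uses the reserved words while, if, and else along with common operators; most high-level languages express the same logic with very similar syntax.

# Count from 0 to 4, using reserved words (while, if, else) and
# operators (<, !=, +) found in most high-level languages.
i = 0
while i < 5:
    if i != 3:
        print("counting", i)
    else:
        print("skipping", i)
    i = i + 1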

Progressive Scan

Progressive Scan Progressive scan is a method of transmitting and displaying a video signal that draws every line of the image with each screen refresh. It contrasts with interlaced video, which updates alternating lines with each refresh (and requires less information at a time). Progressive scan video results in smoother motion and a more detailed picture than an interlaced video signal, particularly in action scenes with a lot of fast motion, and is the standard for streaming online video and DVD / Blu-ray discs. Interlaced video was originally used for standard television broadcasts since it requires less bandwidth to transmit only half the image at a time, but also due to how older televisions work. CRT televisions refresh the screen at 60 Hz, twice for each video frame filmed at 30 fps — once for odd-numbered lines and once for even-numbered lines. This reduces flickering and makes motion look smoother. However, modern LCD and OLED displays are able to update the screen more frequently and display each video frame for multiple passes, resulting in a more detailed image. When LCD and OLED screens display interlaced video, the television processes the signal to combine both interlaced passes into a single progressive scan image. At 60 Hz, a progressive scan image displays the entire image for two refreshes while an interlaced image updates alternating lines. Modern television broadcasts may be either progressive scan or interlaced. Whether a video stream is a progressive scan or an interlaced scan is indicated by a letter following the number of lines of vertical resolution in the image. For example, the two most common television broadcast formats for HDTV are 720p and 1080i — 720p has 720 lines of resolution and is progressive scan, and 1080i has 1,080 lines of resolution and is interlaced. Both of these formats require approximately the same amount of bandwidth. Updated December 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/progressive_scan Copy
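A rough back-of-the-envelope comparison (ignoring color depth and compression) shows why 720p and 1080i need roughly the same bandwidth; the arithmetic below is only illustrative.

# Pixels transmitted per second at 60 screen refreshes per second.
refreshes_per_second = 60

pixels_720p = 1280 * 720 * refreshes_per_second          # full frame on every refresh
pixels_1080i = 1920 * 1080 // 2 * refreshes_per_second   # half the lines per refresh

print(f"720p:  {pixels_720p:,} pixels per second")   # 55,296,000
print(f"1080i: {pixels_1080i:,} pixels per second")  # 62,208,000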

Progressive Web App

Progressive Web App A progressive web app (PWA) is a web application designed to provide an experience on par with a native application. A PWA uses modern web technologies like WebAssembly and JavaScript service workers to provide fast execution of code and the ability to run offline. Unlike a traditional web app, a progressive web app can even be installed to a computer or mobile device and run like any other app in its own window. Traditional web apps and progressive web apps both allow you to run an application in a web browser, but PWAs employ newer technologies and features that make them feel more like a native application. In addition to standard web technologies like HTML, CSS, and JavaScript, PWAs can use WebAssembly to run pre-compiled code (written in programming languages like C or Rust) within the web browser. They can also utilize special scripts, called service workers, that help cache data so that the PWA can operate without a network connection and provide support for notifications and app updates. Some desktop and mobile operating systems allow you to install a progressive web app on a device like a native app. All PWAs also include a JSON-formatted file called a web app manifest that provides metadata (like the app's name, icon, and URL) that an operating system can use to present the PWA as an app. An installed PWA may run in a separate standalone window instead of a browser, or appear separately in the app switcher on a mobile device. An installed PWA can also utilize local data storage that persists across sessions and even update itself in the background while it's not actively running. Updated July 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/progressive_web_app Copy
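As a minimal sketch, a PWA's web app manifest might contain entries like the ones below; the values here are made up for illustration, and real manifests usually list several icon sizes, theme colors, and other metadata. The snippet writes the manifest with Python simply to show its structure.

import json

# A hypothetical, minimal web app manifest for an example PWA.
manifest = {
    "name": "Example Notes",
    "short_name": "Notes",
    "start_url": "/",
    "display": "standalone",
    "icons": [
        {"src": "icon-192.png", "sizes": "192x192", "type": "image/png"}
    ],
}

with open("manifest.webmanifest", "w") as f:
    json.dump(manifest, f, indent=2)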

Projector

Projector A projector is an output device that projects an image onto a large surface, such as a white screen or wall. It may be used as an alternative to a monitor or television when showing video or images to a large group of people. Projectors come in many shapes and sizes, though they are commonly about a foot long and wide and a few inches tall. They can be mounted on ceilings or may be freestanding and portable. Ceiling-mounted projectors are typically larger, especially ones that project a long distance (such as 30 feet or more). These projectors are commonly found in classrooms, conference rooms, auditoriums, and places of worship. Portable projectors can be used wherever there is a bright surface (such as a white or light colored wall). Most projectors have multiple input sources, such as HDMI ports for newer equipment and VGA ports for older devices. Some projectors support Wi-Fi and Bluetooth as well. High-quality projectors used to cost thousands of dollars and the bulbs alone could cost more than a hundred dollars. Modern advances in technology – especially LCD and LED light sources – have reduced the cost of a bright high-quality projector to only a few hundred dollars. Front vs Rear Projection Most projectors can be used for either front or rear projection. The difference is the screen, which is opaque white for front projection and semi-transparent grey for rear projection. Front projection sends the image from the position of the audience to the front of the screen. This is by far the most common method since it does not require an empty space behind the screen. Rear projection projects the image towards the audience from behind the screen. This method is less affected by ambient light and often provides better contrast. Rear projection is commonly used in outdoor settings and commercial areas where space is not an issue. Updated August 31, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/projector Copy

PROM

PROM Stands for "Programmable Read-Only Memory," and is pronounced "p-rom," not "prom" like the high school dance. PROM is a type of ROM that is programmed after the memory is constructed. PROM chips have several different applications, including cell phones, video game consoles, RFID tags, medical devices, and other electronics. They provide a simple means of programming electronic devices. Standard PROM can only be programmed once. This is because PROM chips are manufactured with a series of fuses. The chip is programmed by burning fuses, which is an irreversible process. The open fuses are read as ones, while the burned fuses are read as zeros. By burning specific fuses, a binary pattern of ones and zeros is imprinted on the chip. This pattern represents the program applied to the ROM. While PROM cannot be erased, two other versions of PROM have been developed that can be erased and reprogrammed. One type is called EPROM, or Erasable Programmable Read-Only Memory. This type of memory uses floating-gate transistors and can be erased by strong ultraviolet light. The other type is EEPROM, or Electrically Erasable Programmable Read-Only Memory. EEPROM can be erased with an electrical charge and is used in flash memory. Updated September 15, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/prom Copy
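The one-way nature of fuse programming can be modeled with a bitwise AND: burning a fuse clears a bit from 1 to 0, and no later write can set it back. The short Python sketch below is only an illustration of that idea, not a description of any particular chip.

blank_word = 0b11111111          # factory state: every fuse intact (all ones)

def program(word, pattern):
    # Burning fuses can only turn ones into zeros, so programming is a
    # bitwise AND of the current word with the desired pattern.
    return word & pattern

programmed = program(blank_word, 0b10110010)
print(format(programmed, "08b"))                        # 10110010
print(format(program(programmed, 0b11111111), "08b"))   # still 10110010; burned fuses cannot be restored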

Proprietary Software

Proprietary Software Proprietary is an adjective that describes something owned by a specific company or individual. In the computing world, proprietary is often used to describe software that is not open source or freely licensed. Examples include operating systems, software programs, and file formats. 1. Operating Systems Proprietary operating systems cannot be modified by users or other companies. For example, Windows and OS X are both proprietary OSes. The Windows source code is owned by Microsoft and the OS X source code is owned by Apple. Other companies can make programs that run on these operating systems, but they cannot modify the OS itself. Linux and Android are not proprietary, which is why many different versions of these operating systems exist. 2. Software Programs Proprietary software programs are applications in which all rights are retained by the developer or publisher. They are typically closed-source, meaning the developer does not provide the source code to anyone outside the company. Proprietary programs are licensed to end users under specific terms defined by the developer or publisher. These terms often restrict the usage, distribution, and modification of the software. Most commercial software is proprietary because it gives the developer a competitive advantage. 3. File Formats Some file types are saved in a proprietary file format that can only be recognized by a specific program. These file types are typically created by proprietary software programs and are usually saved in a binary (rather than a text-based) format. By storing data in a proprietary format, developers can ensure that files created with their software cannot be opened with other software programs. Proprietary file types saved by software programs are also called native files. Updated June 20, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/proprietary_software Copy

ProRAW

ProRAW ProRAW is a raw image format developed by Apple. It was introduced in 2020 and first supported by the iPhone 12 Pro models. The ProRAW file format saves uncompressed image data from the camera sensor, but also uses Apple's proprietary image processing. When taking a standard photo, an iPhone applies computational photo processing — automatically adjusting basic settings like white balance, exposure, and color saturation while also using machine learning to combine multiple exposures into a digital image. Most camera raw formats forego image processing entirely and save an uncompressed and unprocessed image directly from the camera's sensor. ProRAW combines these two techniques, saving an uncompressed image, while applying advanced processing techniques like Smart HDR and Night mode. It also captures more colors than standard image formats (12 bits vs 8 bits per channel) and allows for greater dynamic range. iPhones that support ProRAW still capture and process photos as lossy HEIC files by default. In order to capture ProRAW images, you must first enable the feature in iPhone Settings, then select ProRAW in the Camera app. Like other raw formats, a ProRAW image may look darker and less vibrant compared to an automatically-processed image. However, since the file contains more image data, a photographer can make precise adjustments to the color levels, saturation, and exposure without losing any detail. Since ProRAW files are uncompressed, they take up around five times as much storage space as a compressed HEIC or JPEG image. Photographers often export ProRAW images to one of these formats after editing to make them easier to share. NOTE: ProRAW files are saved using the digital negative (.DNG) image file format developed by Adobe. Updated January 11, 2023 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/proraw Copy

Protocol

Protocol A protocol is a standard set of rules that allow electronic devices to communicate with each other. These rules include what type of data may be transmitted, what commands are used to send and receive data, and how data transfers are confirmed. You can think of a protocol as a spoken language. Each language has its own rules and vocabulary. If two people share the same language, they can communicate effectively. Similarly, if two hardware devices support the same protocol, they can communicate with each other, regardless of the manufacturer or type of device. For example, an Apple iPhone can send an email to an Android device using a standard mail protocol. A Windows-based PC can load a webpage from a Unix-based web server using a standard web protocol. Protocols exist for several different applications. Examples include wired networking (e.g., Ethernet), wireless networking (e.g., 802.11ac), and Internet communication (e.g., IP). The Internet protocol suite, which is used for transmitting data over the Internet, contains dozens of protocols. These protocols may be broken up into four categories: Link layer - PPP, DSL, Wi-Fi, etc. Internet layer - IPv4, IPv6, etc. Transport layer - TCP, UDP, etc. Application layer - HTTP, IMAP, FTP, etc. Link layer protocols establish communication between devices at a hardware level. In order to transmit data from one device to another, each device's hardware must support the same link layer protocol. Internet layer protocols are used to initiate data transfers and route them over the Internet. Transport layer protocols define how packets are sent, received, and confirmed. Application layer protocols contain commands for specific applications. For example, a web browser uses HTTPS to securely download the contents of a webpage from a web server. An email client uses SMTP to send email messages through a mail server. Protocols are a fundamental aspect of digital communication. In most cases, protocols operate in the background, so it is not necessary for typical users to know how each protocol works. Still, it may be helpful to familiarize yourself with some common protocols so you can better understand settings in software programs, such as web browsers and email clients. Updated March 29, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/protocol Copy
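As a small illustration of an application layer protocol in action, the Python sketch below uses the standard library to send an HTTP GET request and read the response status; the host name is just an example.

import http.client

# Open a connection, send an HTTP GET request, and read the reply.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)  # e.g., 200 OK
conn.close()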

Proxy Server

Proxy Server A proxy server is an intermediary server through which other computers access the Internet. It serves as a gateway that can monitor and filter all traffic that passes through it. Businesses, universities, and other large organizations often use proxy servers on their networks to improve security and make it easier to manage network traffic. When your computer uses a proxy server, it sends all Internet-bound traffic through the proxy server, which forwards it to the destination. Any response comes through that same proxy server, which sends it to your computer. There are several reasons that you or your organization may want to use a proxy server, and some of the most common reasons are listed below: Privacy: Computers requesting data through a proxy server will appear to the rest of the Internet using the proxy server's IP address instead of their own. Each request is anonymized, which makes it difficult for websites to track individual users and devices. Content filtering: A proxy server can analyze all incoming traffic and block certain websites and content. Many organizations block content that may be dangerous (like sites known to spread viruses) or workplace-inappropriate (like pornography) to keep the employees on their network safe. Caching: If the same website is requested multiple times by different devices, the proxy server can save it in its local data cache. It only needs to download data from the web server once before it can deliver it to everyone else from its cache — which is considerably faster than downloading it multiple times. Security: A proxy server may act as a firewall between an internal network and the Internet. It can inspect both incoming and outgoing network connections, blocking untrusted connections and allowing trusted ones through. Logging: Network administrators can access logs of incoming and outgoing data passing through the proxy server to monitor all network activity from one location. Geolocation: A proxy server set up to allow you to connect to it over the Internet can help mask your location. By connecting to a remote proxy server, you will appear to be in the proxy server's location instead of your own, which can help you access sites with location-restricted content. A proxy server sits between a client computer and the Internet, filtering and analyzing traffic in both directions. Both proxy servers and Virtual Private Networks (VPNs) can act as intermediaries for network traffic, but they are not the same thing. A VPN connection is end-to-end encrypted, which prevents any other computers between the client and the VPN server from reading its traffic. A proxy server connection is not necessarily encrypted but instead focuses on other services, like content filtering and data caching. Updated August 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/proxy_server Copy
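On the client side, using a proxy can be as simple as telling your HTTP library where the proxy lives. The Python sketch below routes requests through a placeholder proxy address; the proxy host and port are hypothetical.

import urllib.request

# Route HTTP and HTTPS requests through a (hypothetical) proxy server.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)

with opener.open("http://example.com/") as response:
    print(response.status)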

PS/2

PS/2 PS/2 is a type of port used by older computers for connecting input devices such as keyboards and mice. The port was introduced with IBM's Personal System/2 computer in 1987 (which was abbreviated "PS/2"). In the following years, the PS/2 port became the standard connection for keyboards and mice in all IBM compatible computers. The PS/2 port has six pins and is roughly circular in shape. Since each PS/2 port is designed to accept a specific input, the keyboard and mouse connections are typically color-coded. For example, the keyboard port on the back of the computer is often purple, while the mouse port is usually green. Similarly, the connector on the end of the keyboard cord is purple and the mouse cord connector is green. This makes it easy for all users to know where to plug the cables into the computer. The concept is similar to the color-coded composite audio/video connections on the back of a TV, which use red, white, and yellow connectors. While the PS/2 port enjoyed a good run for almost two decades, now most keyboards and mice use USB connectors. Unlike PS/2 ports, USB devices can be plugged into any USB port or even a USB hub and the computer will automatically determine what the device is. USB is also "hot swappable," meaning the connections can be removed while the computer is running. If you remove a PS/2 device while the computer is on, it may potentially cause damage to the hardware. Therefore, if you are using a PS/2 device, it is best to turn off the computer before connecting or unplugging a keyboard or mouse. NOTE: The term "PS2" is also a common abbreviation for Sony's PlayStation 2 game console. Updated June 24, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ps2 Copy

Pseudocode

Pseudocode Most software programs are developed using a programming language, like C++ or Java. These languages have a specific syntax that must be adhered to when writing a program's source code. Pseudocode, on the other hand, is not a programming language, but simply an informal way of describing a program. It does not require strict syntax, but instead serves as a general representation of a program's functions. Since each programming language uses a unique syntax structure, understanding the code of multiple languages can be difficult. Pseudocode remedies this problem by using conventional syntax and basic English phrases that are universally understood. For example, a line of PHP code may read: if ($i < 10) { $i++; } This could be written in pseudocode as: if i is less than 10, increment i by 1. By describing a program in pseudocode, programmers of all types of languages can understand the function of a program. Pseudocode is an informal language, so it is mainly used for creating an outline or a rough draft of a program. Because it is not an actual programming language, pseudocode cannot be compiled into an executable program. Therefore, pseudocode must be converted into a specific programming language if it is to become a usable application. Updated December 6, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pseudocode Copy

Public Domain

Public Domain Public domain is a legal term that describes a work or product that is not protected by copyright. The copyright protection of an item in the public domain may have 1) expired, 2) been released by the author, or 3) never existed in the first place. Public domain items are publicly available and can be freely accessed and redistributed. Many different items can be labeled as "public domain." For example, books, speeches, poems, artwork, songs, and videos can all be made freely available to the public. In the computing world, "public domain" is often used to refer to software programs that are offered to the public without copyright restrictions. Public domain software is similar to open source software, in which the source code of a program is made publicly available. However, open source software, while freely distributed, still retains the original developer's copyright. This means the developer can change the redistribution policy at any time. Public domain software is also similar to freeware, which refers to software offered at no charge. However, like open source software, freeware programs are still protected by copyright. Therefore, users may not redistribute the software unless they receive permission from the original developer. Since there are many similarities between freeware, open source, and public domain software, the terms are often used interchangeably. However, there are important legal differences between the licenses, so it is important for developers to choose the correct license when releasing software programs. Public domain software, which offers the least legal protection, is most often published by individuals or educational institutions, rather than companies. When software is offered as public domain, it is often labeled "PD" or may include a Public Domain Mark (PDM). Updated September 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/public_domain Copy

PUK Code

PUK Code Stands for "Personal Unlocking Key Code." A PUK code (or just "PUK") is an 8-digit code used to unlock a locked SIM card. It is similar to a "password reset" in that it resets the SIM card PIN and allows you to choose a new one. The PUK is typically printed next to the default PIN on the larger card that came with your SIM card. If you cannot find the PUK code, your wireless carrier may be able to provide it. SIM Card PIN The 4-digit PIN prevents unauthorized use of a SIM card. You may need to enter the PIN when you activate a new phone, install a new SIM card, or transfer a SIM card to another phone. Depending on your smartphone's settings, it may require you to enter the PIN each time you start your phone. You can typically change the PIN on a mobile device by going to Security → SIM Security in the device settings. The SIM card PIN differs from the code you choose to lock your phone. On most iPhones, SIM Security is OFF by default. You can turn SIM Security on or off on an iPhone by selecting Settings → Cellular → SIM PIN. If you enter your PIN incorrectly three times, you must enter the correct PUK to unlock the card. If you enter the wrong PUK code 10 times, the SIM card will become permanently locked, and you will need to replace it. NOTE: A PUK may also be called a PUC, or Personal Unblocking Code. Updated September 3, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/puk_code Copy

PUM

PUM Stands for "Potentially Unwanted Modification." A PUM is an unwanted change made to your computer's settings. PUMs can be performed by both legitimate applications and malware, though changes made by malware are more likely to cause serious problems. In some cases, you may not know about a PUM until after it has taken place. PUMs often modify settings at the system level. On Windows systems, this usually involves updating the Windows registry. In Mac OS X, a PUM may modify the System Preferences or the LaunchServices database. A common example of a potentially unwanted modification is when the default program is changed for one or more file types. Many applications set themselves as the default program for supported file types when they are installed. While most programs ask you if you would like them to be configured as the default application, some do not. Even programs that do ask for your permission may change more file associations than you expect. When a default program is changed, it may be a nuisance, but it will not cause serious problems and can easily be fixed. Other PUMs involve more complex changes and can create security problems with your computer. Examples include PUMs that modify your Internet security settings or change your user login preferences. These types of changes may be caused by viruses or spyware and should be fixed as soon as they are found. Installing an Internet security program on your computer is a good way to both prevent and fix unwanted modifications made to your system. NOTE: The term "PUM" was derived from "PUP," which is a potentially unwanted program. Updated September 6, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pum Copy

PUP

PUP Stands for "Potentially Unwanted Program." The term "PUP" was created by McAfee, a security technology company, to describe unwanted software. A PUP is similar to malware in that it may cause problems once it is installed on your computer. However, unlike malware, you consent to a PUP being installed, rather than it installing itself without your knowledge. Most PUPs are spyware or adware programs that cause undesirable behavior on your computer. Some may simply display annoying advertisements, while others may run background processes that cause your computer to slow down. The label "potentially unwanted program" is a fitting description of these applications because you may not find out about their obnoxious behavior until after they are installed. McAfee coined the term "PUP" to avoid labeling programs as malware, when users consent to downloading and installing them. However, the term is often seen as a euphemism for malware since most users want to remove PUPs immediately after they have been installed. Since PUPs are often installed along with legitimate applications, be careful not to agree to install extra programs if you don't know what they are. Updated September 6, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pup Copy

Push

Push Push refers to a system in which data is "pushed" to a device by a server rather than "pulled" or "fetched" after a request by the user. The server initiates the data transfer when new content is available without any action taken by the client device. One common use for push technology (also called "server push") is to automatically deliver email to a user's device as the server receives it instead of waiting for the user to check for new messages. Messaging services like SMS, iMessage, and WhatsApp also push data to a user's device instantly without waiting for a request. Apple, Microsoft, and Google include support in their mobile and desktop operating systems for push notifications. These notifications allow a remote server to push information through an application you have installed into the operating system's notification center. Web browsers like Chrome, Firefox, Edge, and Safari also support the web push protocol, which lets websites send push notifications through the browser to the user's notification center. Thankfully, websites cannot send notifications without your permission first. NOTE: Mobile operating systems like iOS and Android also support special civic push notifications, which can send severe weather warnings, missing person alerts, and other emergency notifications to people in a local area. Updated February 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/push Copy

PXE

PXE Stands for "Preboot Execution Environment" and is typically pronounced "pixie." PXE is a set of standards used to load an operating system over a network. It allows network administrators to load or reset system software via a network connection rather than physical media, such as a USB drive or DVD. Most computers come preloaded with an operating system (OS), which you can upgrade or restore if necessary. However, if a computer does not already have an OS, it must support installing an operating system from scratch. Older PCs provided this capability via the BIOS, while modern PCs have a Unified Extensible Firmware Interface (UEFI). For PXE to function, the client system must support "PXE Boot" in the BIOS/UEFI boot options. Enabling PXE Boot in the BIOS or UEFI readies a client machine to start up from a network connection. If PXE is prioritized over the local startup disk, the computer will first check for a signal from a server on the network via DHCP. Once the PXE connection is established, the server can send data to the client machine via TFTP. For example, the server may send the client boot files or a bootable disk image, which it can use to start up. PXE is especially useful in network environments where multiple systems are regularly refreshed or kept in sync. With PXE, a single server can simultaneously deploy an operating system to several clients, automating the refresh process. Updated March 7, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/pxe Copy

Python

Python Python is a high-level programming language designed to be easy to read and simple to implement. It is open source, which means it is free to use, even for commercial applications. Python can run on Mac, Windows, and Unix systems and has also been ported to Java and .NET virtual machines. Python is considered a scripting language, like Ruby or Perl, and is often used for creating Web applications and dynamic Web content. It is also supported by a number of 2D and 3D imaging programs, enabling users to create custom plug-ins and extensions with Python. Examples of applications that support a Python API include GIMP, Inkscape, Blender, and Autodesk Maya. Scripts written in Python (.PY files) can be parsed and run immediately. They can also be saved as compiled programs (.PYC files), which are often used as programming modules that can be referenced by other Python programs. Updated June 15, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/python Copy
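For a quick illustrative example, the short script below can be saved as hello.py and run directly with the Python interpreter, with no separate compile step; the interpreter may cache compiled bytecode in a .pyc file as a side effect.

# hello.py
def greet(name):
    return f"Hello, {name}!"

if __name__ == "__main__":
    print(greet("world"))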

QBE

QBE Stands for "Query By Example." QBE is a feature included with various database applications that provides a user-friendly method of running database queries. Typically without QBE, a user must write input commands using correct SQL (Structured Query Language) syntax. This is a standard language that nearly all database programs support. However, if the syntax is slightly incorrect the query may return the wrong results or may not run at all. The Query By Example feature provides a simple interface for a user to enter queries. Instead of writing an entire SQL command, the user can just fill in blanks or select items to define the query she wants to perform. For example, a user may want to select an entry from a table called "Table1" with an ID of 123. Using SQL, the user would need to input the command, "SELECT * FROM Table1 WHERE ID = 123". The QBE interface may allow the user to just click on Table1, type in "123" in the ID field and click "Search." QBE is offered with most database programs, though the interface is often different between applications. For example, Microsoft Access has a QBE interface known as "Query Design View" that is completely graphical. The phpMyAdmin application used with MySQL, offers a Web-based interface where users can select a query operator and fill in blanks with search terms. Whatever QBE implementation is provided with a program, the purpose is the same – to make it easier to run database queries and to avoid the frustrations of SQL errors. Updated February 26, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/qbe Copy

Qi Charging

Qi Charging Qi (pronounced "chee," from the Chinese word for "vital force" or "energy") is a wireless charging standard for smartphones and other mobile devices. Qi chargers use inductive charging to deliver electricity from a charging pad to a compatible mobile device over a short range — about 15 mm — without a physical connection between electrical contacts. It allows people to charge their smartphones without plugging and unplugging any USB cables, even through a protective phone case. Qi charging uses flat wound coils in both the charging pad and the mobile device to transfer power using inductive charging. First, an electrical current is run through the coils in the charging pad, generating an oscillating magnetic field. When you place a mobile device on the charging pad and the two sets of coils are properly aligned, that magnetic field induces an alternating current on the mobile device that provides power to charge its battery. Wireless chargers are available in a variety of form factors. Most are flat pads that a phone can lie flat on, but you can also find stands that can charge while a phone sits upright (in either portrait or landscape orientation). Some companies even integrate charging pads into desks, end tables, desk lamps, and other furniture. The convenience offered by wireless charging comes with a few drawbacks. First, you must align the coils in the phone and the charging pad closely, or it won't charge. Wireless charging also delivers power more slowly than plugging in a charging cable, so it's not ideal when you're in a hurry. Finally, wireless charging is less efficient than wired charging, with some of the electricity drawn going to waste as heat that can, over time, harm the phone's battery health. NOTE: Many Qi charging pads use USB cables to get power from computer ports and charging bricks, but not every port or charging brick is able to provide enough power. Check the requirements for your charging pad if it does not appear to charge your phone. Updated December 19, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/qi_charging Copy

QoS

QoS Stands for "Quality of Service." If there are a lot of devices on a network, all fighting for a limited amount of bandwidth, speed and latency can degrade for them all. Configuring QoS settings on the network's router can prioritize certain devices and types of traffic, ensuring that the most important data reaches its destination quickly. QoS features vary, depending on the capabilities of the router. Some routers can prioritize network traffic by type. For example, video calls and streaming media using the UDP protocol rely on their data packets arriving as fast as possible for a smooth experience, while other traffic like web browsing and file downloading use the TCP protocol that is less reliant on speed and low latency. Enabling QoS can prioritize time-sensitive services, giving them all of the bandwidth they need at the lowest latency possible. Less powerful routers lack the ability to prioritize by traffic type, but still can use QoS to prioritize certain devices on a network above others, by either IP or MAC address. When QoS is enabled, a router manages traffic by focusing on several factors. First, it allocates bandwidth to give certain traffic a larger percentage of available bandwidth over lower-priority traffic. It also manages how the router queues incoming and outgoing data packets, making lower-priority packets wait while the higher-priority ones get to move first. If there's significant congestion on the network and some data packets need to be dropped from the queue, QoS rules also drop the lowest-priority packets first. Updated September 26, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/qos Copy

QR Code

QR Code A QR code (short for "quick response" code) is a type of barcode that contains a matrix of dots. It can be scanned using a QR scanner or a smartphone with a built-in camera. Once scanned, software on the device converts the dots within the code into numbers or a string of characters. For example, scanning a QR code with your phone might open a URL in your phone's web browser. All QR codes have a square shape and include three square outlines in the bottom-left, top-left, and top-right corners. These square outlines define the orientation of the code. The dots within the QR code contain format and version information as well as the content itself. QR codes also include a certain level of error correction, defined as L, M, Q, or H. A low amount of error correction (L) allows the QR code to contain more content, while higher error correction (H) allows the code to be read even if it is partially damaged or obscured. QR codes have two significant benefits over traditional UPCs – the barcodes commonly used in retail packaging. First, since QR codes are two-dimensional, they can contain significantly more data than a one-dimensional UPC. While a UPC may include up to 25 different characters, a 33x33 (version 4) QR code can contain 640 bits or 114 alphanumeric characters. A 177x177 (version 40) QR code can store up to 23,648 bits or 4,296 characters. Another advantage of QR codes is that they can be scanned from a screen. Standard UPC scanners use a laser to scan barcodes, which means they typically cannot scan a UPC from a screen (like a smartphone). QR scanners, however, are designed to capture 2D images printed on paper or displayed on a screen. This makes it possible to use a QR code on your smartphone as a boarding pass at the airport or as a ticket for an event. Updated March 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/qr_code Copy
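Generating a QR code takes only a couple of lines with a third-party library. The sketch below assumes the Python "qrcode" package (with Pillow for image output) is installed, for example via pip install qrcode[pil].

import qrcode  # third-party package; install with: pip install qrcode[pil]

# Encode a URL as a QR code image; scanning the image opens the URL.
img = qrcode.make("https://techterms.com/definition/qr_code")
img.save("qr_code_example.png")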

Quad-Core

Quad-Core A quad-core CPU has four processing cores in a single chip. It is similar to a dual-core CPU, but has four separate processors (rather than two), which can process instructions at the same time. Quad-core CPUs have become more popular in recent years as the clock speeds of processors have plateaued. By including multiple cores in a single CPU, chip manufacturers can generate higher performance without boosting the clock speed. However, the performance gain can only be realized if the computer's software supports multiprocessing. This allows the software to split the processing load between multiple processors (or "cores") instead of only using one processor at a time. Fortunately, most modern operating systems and many programs provide support for multiprocessing. Some examples of quad-core CPUs include the Intel Core 2 Quad, Intel Nehalem, and AMD Phenom X4 processors. The Intel processors are used in Mac, Windows, and Linux systems, while the AMD processors are only used in Windows and Linux systems. While four cores may seem impressive, some high-end computers have two quad-core CPUs, giving them a total of eight processing cores. Now that is some core power! Updated September 16, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/quadcore Copy

Quantization

Quantization Quantization is the process of mapping input to a fixed set of values. For example, an application may include a quantizing function that rounds a series of floating point numbers to the nearest integer. The result is a reduced data set that contains only integers. Quantizing data provides a discrete set of values that, while estimated, can be more easily processed. For example, rounding numbers to the nearest thousand may be useful when charting population data. Constraining RGB values to a limited range may provide color consistency when publishing a website. Audio Production Quantization is common in digital audio production, but it can refer to several different things. Examples include 1) pitch quantization, 2) sample quantization, and 3) MIDI quantization. 1. Pitch Quantization Some instruments, like the piano and harpsichord, have discrete notes. Other instruments, like the trombone and guitar, have an infinite range of pitches. Quantizing guitar input may help match the tone of a piano or other instruments within an audio mix. Auto-tune, a type of pitch quantization commonly applied to vocal recordings, provides customizable settings for modifying pitch. A gradual setting produces smooth, natural-sounding pitch adjustments. A faster setting results in a more robotic-sounding effect. 2. Sample Quantization Sampling is the process of converting an analog audio signal to a digital format. Because analog audio has an infinite range of frequencies, converting it to digital data requires quantizing the values to a fixed range. The sample rate determines the number of times the signal is sampled, or quantized, each second. Standard sample rates include 44.1 kHz and 96 kHz (96,000 times per second). 3. MIDI Quantization A MIDI recording does not contain audio data, but instead stores information about each note played during the recording process. It includes which note was pressed, when it was pressed, how hard it was pressed (the velocity), and when it was released. While a MIDI recording already contains discrete data, it is possible to quantize the timing of each note. For example, after recording a drum loop with a digital keyboard, it is common to quantize the data to the nearest 16th, 8th, or even quarter note. Shifting the bass and snare drum notes to exact timings creates a more consistent beat. NOTE: Quantizing audio and MIDI data can help clean up minor errors and inconsistencies in a recording. However, it can also make a recording sound artificial since human recordings are not perfect. Therefore, auto-tune is applied sparingly to professional recordings, and MIDI quantization is best-suited for electronic music. Updated October 2, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/quantization Copy
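All of the audio examples above boil down to the same idea: snapping a continuous value to the nearest point on a fixed grid. The Python sketch below shows that idea with note timings quantized to the nearest quarter beat; it is illustrative only.

# Snap each value to the nearest multiple of a fixed step size.
def quantize(value, step):
    return round(value / step) * step

# Note start times (in beats) recorded slightly off the grid.
times = [0.02, 0.26, 0.49, 0.77]
print([quantize(t, 0.25) for t in times])  # [0.0, 0.25, 0.5, 0.75]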

Query

Query Query is another word for question. In fact, outside of computing terminology, the words "query" and "question" can be used interchangeably. For example, if you need additional information from someone, you might say, "I have a query for you." In computing, queries are also used to retrieve information. However, computer queries are sent to a computer system and are processed by a software program rather than a person. One type of query, which many people perform multiple times a day, is a search query. Each time you search for something using a search engine, you perform a search query. When you press Enter, the keywords are sent to the search engine and are processed using an algorithm that retrieves related results from the search index. The results of your query appear on a search engine results page, or SERP. Another common type of query is a database query. Databases store data in a structured format, which can be accessed using queries. In fact, the structured query language (SQL) was designed specifically for this purpose. Users can create SQL queries that retrieve specific information from a database. For example, a human resources manager may perform a query on an employee database that selects all employees in a specific department that were hired between 11 and 12 months ago. The results might be used to provide the department head with current candidates for an annual review. While you may not always notice them, computer queries are happening all the time. For instance, most dynamic websites query a database each time you visit a new page. Software applications often contain background functions that perform queries based on your input. While many types of computer queries exist, their basic purpose is the same — to receive an answer to a question. NOTE: The word "query" can be used as either a noun or a verb. For example, you can "perform a search query" or "query a database." Both examples are correct uses of the word "query." Updated May 13, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/query Copy
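As a small, self-contained illustration of a database query, the Python sketch below builds an in-memory SQLite database and selects rows from it; the table and data are made up for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Ana", "Sales"), ("Ben", "IT"), ("Cleo", "IT")],
)

# The query: select every employee in the IT department.
for (name,) in conn.execute(
    "SELECT name FROM employees WHERE department = ?", ("IT",)
):
    print(name)  # Ben, Cleo
conn.close()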

Queue

Queue A queue is a list of tasks waiting to be processed. After new tasks join a queue, they wait their turn as the system processes other tasks that were there first or are of a higher priority. Computer hardware and software applications use queues to know which data to process or transmit next after completing the previous task. Most job queues execute tasks in the same order that they were added, known as First In, First Out (FIFO). However, some tasks benefit from other ordering methods. The Last In, First Out (LIFO) method prioritizes the most recently added task instead of the oldest one. Priority scheduling assigns each task a priority level using pre-determined criteria — for example, the task with the earliest deadline or the task that requires the least processing time — and processes the job with the highest priority first. Every operating system includes a software component, known as a task scheduler, which creates a queue of processes when several programs are running simultaneously. The task scheduler gives the process at the front of the queue CPU time to execute its instructions; when one task finishes, it is removed from the queue so that the next one in line has its turn. Some applications create their own task queues for processor-intensive tasks, which may allow the user to customize the order in which some jobs get carried out. For example, video encoding applications like HandBrake allow you to create a batch process queue that encodes one video at a time and automatically starts the next video after the previous encoding job finishes. A printer queue in macOS with two documents waiting for the current job to print. In addition to computers, other devices also use queues to manage data. Network devices like switches and routers create queues when forwarding data packets across a network, ensuring packets arrive at the destination in the same order that they were transmitted. Printers also use queues to manage multiple print jobs, printing them in the order they were received. Most printers include software that allows users to manually sort, cancel, and add jobs in the queue. Updated August 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/queue Copy
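A minimal FIFO queue sketch in Python, using hypothetical task names: items leave the queue in the same order they were added.

from collections import deque

tasks = deque()
tasks.append("print report")   # joins the back of the queue
tasks.append("encode video")
tasks.append("send email")

while tasks:
    print("processing:", tasks.popleft())  # oldest task is handled first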

QuickTime

QuickTime QuickTime is a multimedia framework developed by Apple. It supports playback of several common audio and video formats, including Apple's proprietary .MOV format. The QuickTime framework includes two fundamental aspects — QuickTime Player and QuickTime developer tools, also known as "QTKit." QuickTime Player is an application used for opening and playing videos and audio files. QTKit is an API that developers can use to process multimedia formats within a program. Examples include creating, saving, and converting media files. The API also provides developers with access to prebuilt media controls. QuickTime Kit was deprecated in 2013 with the release of OS X 10.9 Mavericks. Apple recommends developers use the newer AVFoundation and AVKit frameworks instead. History Apple released the first version of QuickTime in 1991. For over two decades, it was the standard media technology in Apple's Mac operating systems, from the classic Mac OS through OS X. During this time, Apple released several versions of QuickTime Player, through version 7.x. In 1998, Apple released a "Pro" version as a paid upgrade. QuickTime Player Pro provided more editing features and additional file format support than the free version. In 2009, Apple released Mac OS X 10.6 Snow Leopard, which included a new version of QuickTime called QuickTime X. This modernized version of QuickTime Player was bundled with macOS through version 10.14 Mojave, released in 2018. Apple did not release QuickTime X for Windows, but did offer several versions of QuickTime Player for Windows, through version 7.7.9. Apple discontinued support for the Windows version of QuickTime Player in early 2016. File extensions: .MOV, .QT, .MP4 Updated May 24, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/quicktime Copy

QWERTY

QWERTY QWERTY (pronounced "quirty") is an adjective used to describe standard Western (or Latin-based) keyboards. If you look at your keyboard, and the first six letters under the numbers are Q-W-E-R-T-Y, then you have a QWERTY keyboard. Nearly all keyboards used in the western hemisphere have a QWERTY layout. Some countries use slightly modified versions, such as the Swedish keyboard, which includes the letters Å, Ä, and Ö and the Spanish keyboard, which contains the letters Ñ and Ç. But these keyboards still have the QWERTY characters in the upper-left corner. History The original QWERTY keyboard layout was developed over 150 years ago by Christopher Latham Sholes. It was popularized by the Sholes and Glidden typewriter, which Sholes began developing in 1867. Remington bought the rights to the typewriter and made some slight changes before mass-producing an updated version in 1874. The goal of the QWERTY layout is to make the most common keys the most easily accessible (which is why the rarely used Q sits in the corner). By placing the vowels closely together, it also helped prevent typewriters from jamming when typing quickly. The only significant competitor to the QWERTY keyboard came in 1932, when August Dvorak developed a new layout. His design placed all the vowels and the five most common consonants in the middle row. The goal was twofold: 1) to make the most common keys even easier to type and 2) to create an alternating rhythm between the left and right hands. While the Dvorak keyboard may have been technically more efficient, even in the early 1900s, people were not willing to learn a new keyboard layout. The result is that the QWERTY layout has survived for over one and a half centuries. It can be found in typewriters, desktop computers, laptops, and the touchscreen devices we use today. Updated May 30, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/qwerty Copy

Race Condition

Race Condition A race condition occurs when a software program depends on the timing of one or more processes to function correctly. If a thread runs or finishes at an unexpected time, it may cause unpredictable behavior, such as incorrect output or a program deadlock. Most software programs are multithreaded, meaning they can process several threads at once. A well-programmed application will ensure the results of each thread are processed in the expected order. If a program relies on threads that run in an unpredictable sequence, a race condition may occur. A simple example is a logic gate that handles boolean values. The AND logic gate has two inputs and one output. If inputs A and B are true, the AND gate produces TRUE. If one or both inputs are false, it produces FALSE. A race condition may happen if a program checks the logic gate result before variables A and B are loaded. The correct process would be: Load variable A Load variable B Check result of the AND logic gate An incorrect sequence would be: Load variable A Check result of the AND logic gate Load variable B The result of the second example above may or may not be the same as the first example. For instance, variable B may be FALSE before and after it is loaded, which would not change the result. If A is FALSE, it does not matter whether or not B is TRUE or FALSE. However, if both A and B are true, the result should be TRUE. Loading variable B after checking the result of the logic gate would produce an incorrect result of FALSE. The inconsistent output produced by race conditions may cause bugs that are difficult to detect. Programmers can avoid these issues by ensuring threads are processed in a consistent sequence. Updated January 24, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/race_condition Copy
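Below is a minimal sketch of a race condition in Python, using two threads that update a shared counter without a lock; depending on how the threads are scheduled, some increments may be lost.

import threading

counter = 0

def add_many():
    global counter
    for _ in range(100_000):
        counter += 1   # read-modify-write; not atomic across threads

threads = [threading.Thread(target=add_many) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000 when increments are lost; guarding the
# update with a threading.Lock makes the result consistent.
print(counter)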

RADCAB

RADCAB RADCAB is a mnemonic acronym that helps people evaluate information found online. It was created by Karen Christensson, a library media specialist, for helping students research topics on the Web. However, the acronym can be used by anyone who needs to look up information online. Each letter of RADCAB represents a means of evaluating information: Relevancy - Is the information relevant to my topic or research question? Appropriateness - Is the information appropriate to my age and values? Detail - Does the information contain enough detail for the subject I am researching? Currency - How recently was the information written or published? Authority - Who is the author and what are his or her credentials? Bias - Is the information written only to inform or is it meant to be persuasive? To learn more about RADCAB and its many uses, visit RADCAB.com. Updated October 28, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/radcab Copy

RAID

RAID Stands for "Redundant Array of Independent Disks." RAID is a method of seamlessly storing data across multiple physical disk drives. The disks in a RAID array appear to a computer's operating system as a single storage volume instead of multiple independent disks. In addition to providing a large amount of data storage, using a RAID array can improve disk performance and provide redundancy that helps prevent data loss in the event of a disk failure. RAID arrays spread data across multiple disks using a technique called "striping." Blocks of data, typically a few megabytes in size, are interleaved between drives equally, which spreads out I/O activity evenly among all the disks. While RAID does not require all disks in an array to be the same size, using identical disks is more efficient since an array's total capacity is based on the size of the smallest drive. NAS devices and file servers often use RAID arrays to increase fault tolerance and merge multiple disks into a single volume. RAID 5 distributes parity data across every disk in the array. RAID Levels There are multiple levels of RAID that provide different benefits. Some are focused on maximizing storage space, while others focus on redundancy and fault tolerance. While not an exhaustive list, the most common levels are described below: RAID 0 stripes data evenly across multiple disks, providing a single large storage volume equal to the combined size of all the disks. This level can increase read and write speeds by spreading I/O activity across multiple disks. It does not offer redundancy or data protection — if one disk fails, the entire array fails. RAID 1 mirrors the contents of one disk to another. It only provides the storage space of the smaller disk in the array, but since the second disk in the array is a complete copy of the first, it allows either disk to fail without data loss. RAID 1 requires a pair of disks. RAID 5 stripes data across multiple disks and equally distributes parity information. It requires a minimum of three disk drives and reserves the equivalent storage space of one disk for parity. For example, a RAID 5 array of four 10 TB hard drives provides 30 TB of disk space, with the remaining 10 TB used for parity. RAID 5 arrays can tolerate the failure of one disk without losing data. RAID 6 is effectively RAID 5 with double the parity; it requires a minimum of four disks and reserves the equivalent storage space of two disks for parity. For example, a RAID 6 array with four 10 TB hard drives provides 20 TB of storage space but can tolerate the failure of two disks without losing data. RAID 10 (or RAID 1+0) is known as nested RAID and combines RAID 1 and RAID 0. It mirrors data across pairs of disks using RAID 1, then stripes data across those mirrored pairs using RAID 0. It requires at least four disk drives and can tolerate the failure of one disk in each mirrored pair. NOTE: While most levels of RAID provide data redundancy that helps recover from a disk failure, RAID is not a backup solution. It cannot help you recover data lost for reasons other than hardware failures, like data corruption, malware, or accidental deletion. Updated August 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/raid Copy
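As a rough illustration of the capacity arithmetic in the examples above, the Python sketch below estimates usable space for a few common RAID levels, assuming identical disks.

def usable_capacity_tb(level, disks, disk_size_tb):
    # Simplified arithmetic; assumes all disks are the same size.
    if level == 0:
        return disks * disk_size_tb          # striping only, no redundancy
    if level == 1:
        return disk_size_tb                  # every disk mirrors the first
    if level == 5:
        return (disks - 1) * disk_size_tb    # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_size_tb    # two disks' worth of parity
    if level == 10:
        return (disks // 2) * disk_size_tb   # half the disks hold mirror copies
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(5, 4, 10))   # 30 TB, matching the RAID 5 example
print(usable_capacity_tb(6, 4, 10))   # 20 TB, matching the RAID 6 example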

RAM

RAM Stands for "Random Access Memory" and is pronounced "ram." RAM is a common hardware component found in electronic devices, including desktop computers, laptops, tablets, and smartphones. In computers, RAM may be installed as memory modules, such as DIMMs or SO-DIMMs. In tablets and smartphones, RAM is typically integrated into the device and cannot be removed. The amount of RAM in a device determines how much memory the operating system and open applications can use. When a device has sufficient RAM, several programs can run simultaneously without any slowdown. When a device uses close to 100% of the available RAM, memory must be swapped between applications, which may cause a noticeable slowdown. Therefore, adding RAM or buying a device with more RAM is one of the best ways to improve performance. "RAM" and "memory" may be used interchangeably. For example, a computer with 16 GB of RAM has 16 gigabytes of memory. This is different than storage capacity, which refers to how much disk space the device's HDD or SSD provides for storing files. System memory is considered "volatile" memory since it only stores data while a device is turned on. When the device is powered down, data stored in the RAM is erased. When the device is restarted, the operating system and applications load fresh data into the system memory. This is why restarting a computer often fixes problems. Updated June 21, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ram Copy
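As a rough illustration of checking how much memory is installed and in use, the following Python sketch assumes the third-party psutil library is available; it is not part of the definition above:

import psutil  # third-party library, assumed to be installed (pip install psutil)

mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / 1024**3:.1f} GB")
print(f"Available RAM: {mem.available / 1024**3:.1f} GB")
print(f"In use:        {mem.percent}%")  # close to 100% usually means swapping and slowdowns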

Ransomware

Ransomware Ransomware is a type of malware that prevents you from using your computer or accessing certain files unless you pay a ransom. It often encrypts your files so that they cannot be opened. Examples of ransomware include Locky, Reveton, CryptoLocker, and CryptoWall. Ransomware is often distributed as a trojan, or malware disguised as a legitimate file. Once installed, it may lock your computer and display a "lockscreen" with a message saying you must pay a ransom to regain use of your computer. This may be a fake message purporting to be from a government institution like the FBI or Department of Defense saying you must pay a fine. It may also be a blatant ransom message saying your files are being held for ransom and you must pay to access them again. The ransom message typically includes instructions for how to pay the fine, often by credit card or Bitcoin. Ransom amounts range from less than $100 to several thousand dollars. Some ransomware may allow you to use your computer, but will prevent you from opening certain files. When you try to open a file or directory encrypted by the ransomware, you may see a message or alert box stating your files are being held for ransom and you must pay a fee to regain access to them. Dealing with Ransomware The best way to deal with ransomware is to prevent it. Don't open unknown files or downloads from untrusted websites. You may also want to install antivirus or Internet security software that can detect and eliminate ransomware threats before they take over your computer. This is especially true if you use Windows, as it is the platform most commonly targeted by ransomware. If your computer is infected with ransomware, you have a few options. If you have a recent system backup, you can revert to a saved state before the ransomware infected your computer. Search for an Internet security utility that can remove the specific ransomware installed on your system and possibly decrypt your files. (Not recommended) Pay the ransomware fee and contact your bank or credit card company to block or refund the transaction. NOTE: TechTerms does not recommend paying a ransom to remove ransomware. There is no guarantee that paying the fee will remove the ransomware from your computer. The best way to recover from a ransomware attack is to restore your files from a recent backup. Updated November 26, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ransomware Copy

Raster Graphic

Raster Graphic Raster graphics are computer graphics that consist of a grid of pixels, also known as a bitmap. Most images on the Internet and your computer are raster graphics. They are one of the two primary image types used in computer graphics, along with vector graphics. The file size of a raster graphic depends on several variables. The first is a raster graphic's image size, also known as its resolution — the number of pixels it is wide by the number of pixels it is tall. A higher-resolution image has more pixels and thus requires more disk space to store. For example, an image 640 pixels wide by 480 pixels tall consists of 307,200 pixels total, while a larger 3072 x 2048 image requires 6,291,456 pixels. A raster graphic's file size also depends on its color depth — how many bits of data each pixel requires. An image with 8 bits of color depth can only display 256 colors. An image with 24 bits of color depth uses three times as much data but can use more than 16 million colors. Since raster graphics consist of a grid of pixels, resizing one to change its resolution will necessarily change the contents of those pixels. Shrinking one down requires the data from multiple pixels to be combined, causing fine details to be lost. Scaling one up requires creating new data between the existing pixels, which can result in an image that's either blurry or blocky. NOTE: There are many file formats used for raster graphics, some of which include image compression to reduce file size. For example, a large bitmap image saved in the lossy JPEG format will have a smaller file size than the same bitmap image saved as a losslessly-compressed PNG file, which in turn will be smaller than that image saved as an uncompressed BMP file. Updated January 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/raster_graphic Copy
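The relationship between resolution, color depth, and file size can be shown with a short calculation. The Python sketch below estimates the uncompressed size of a bitmap; compression, metadata, and alpha channels are ignored, so real file sizes will differ:

# Rough uncompressed size of a raster image: width x height x bits per pixel
def raster_size_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(raster_size_bytes(640, 480, 24))    # 921,600 bytes (about 0.9 MB)
print(raster_size_bytes(3072, 2048, 24))  # 18,874,368 bytes (about 18 MB)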

Rational Number

Rational Number A rational number is any number that can be expressed as a ratio of two integers (hence the name "rational"). It can be written as a fraction in which the top number (numerator) is divided by the bottom number (denominator). All integers are rational numbers since they can be divided by 1, which produces a ratio of two integers. Many floating point numbers are also rational numbers since they can be expressed as fractions. For example, 1.5 is rational since it can be written as 3/2, 6/4, 9/6, or another fraction of two integers. Pi (π) is irrational since it cannot be written as a fraction. A floating point number is rational if it meets one of the following criteria: it has a limited number of digits after the decimal point (e.g., 5.4321) it has an infinitely repeating number after the decimal point (e.g., 2.333333...) it has an infinitely repeating pattern of numbers after the decimal point (e.g., 3.151515...) If the digits after the decimal point continue infinitely with no repeating pattern, the number is not rational; it is "irrational." Below are examples of rational and irrational numbers. 1 - rational 0.5 - rational 2.0 - rational √2 - irrational 3.14 - rational π (3.14159265359...) - irrational √4 - rational √5 - irrational 16/9 - rational 1,000,000.0000001 - rational In computer science, it is significant whether a number is rational or irrational. A rational number can be stored as an exact numeric value, while an irrational number must be estimated. NOTE: The number zero (0) is a rational number because it can be written as 0/1, which equals 0. Updated June 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rational_number Copy
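Some programming languages can store rational numbers exactly as a ratio of two integers. The Python sketch below uses the standard fractions module as an illustration; the specific values are only examples:

from fractions import Fraction

# 1.5 is rational: it can be stored exactly as a ratio of two integers.
print(Fraction(3, 2))    # 3/2
print(Fraction("1.5"))   # 3/2 -- Python reduces 15/10 automatically
print(Fraction(16, 9))   # 16/9, already in lowest terms

# 0.1 looks simple, but as a binary floating point value it is stored as a nearby rational number:
print(Fraction(0.1))     # 3602879701896397/36028797018963968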

Raw Data

Raw Data Raw data is unprocessed, unformatted data that has been collected and saved on a computer. Raw data serves as input for data processing tasks that turn it into useful information. Raw data may be generated automatically by a program, or someone may enter it into a computer manually. Raw data files may consist of either plain text or binary data. For example, a router might keep a log of all network activity saved as raw data in a log file; a digital camera, on the other hand, saves raw image data into a binary Camera RAW file format. Regardless of the data type, raw data is often not very informative in its original state and depends on later processing and formatting. Processing and analyzing raw data to make it useful and informative results in "cooked" data. Files containing raw data may be human-readable at first glance — files produced by a computerized weather sensor may show numerical readings for temperature, humidity, and wind speed taken at regular intervals. However, this is still raw data because it has not been processed. In this case, processing could mean importing it into a spreadsheet application where you can fix errors, convert units, and create charts of the readings over time. Once you have processed raw data and converted it into a usable format, it is instead referred to as "cooked" data. Updated August 30, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/raw_data Copy
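As a simple illustration of "cooking" raw data, the Python sketch below parses a few hypothetical weather-sensor readings and computes an average; the field layout is an assumption made for the example:

# Raw readings from a hypothetical weather sensor (timestamp, temperature, humidity, wind speed)
raw_lines = [
    "2023-08-30 12:00, 21.4, 55, 3.2",
    "2023-08-30 12:10, 21.9, 54, 2.8",
    "2023-08-30 12:20, 22.3, 53, 3.5",
]

# "Cooking" the data: parse each line and compute an average temperature
temps = []
for line in raw_lines:
    timestamp, temp_c, humidity, wind = [field.strip() for field in line.split(",")]
    temps.append(float(temp_c))

print(f"Average temperature: {sum(temps) / len(temps):.1f} degrees C")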

RCS

RCS Stands for "Rich Communication Services." RCS is a communication technology for sending and receiving text messages on smartphones. It is an extension of older SMS text messaging technology that increases message length and adds modern features, like delivery receipts and high-resolution media. Most mobile phone carriers and Android smartphones can use RCS messaging, but iPhones still lack support. Unlike SMS and MMS messaging, which use the cellular voice network to deliver messages, RCS uses a mobile phone's LTE or 5G Internet connection to send text and multimedia messages. It addresses several shortcomings of SMS and MMS messaging and adds new features common in proprietary messaging apps. The primary improvements made by RCS include: No limit on the length of a text message, instead of the SMS limit of 160 characters. High-resolution media sharing for images, audio, and video, unlike the highly-compressed low-resolution media sent over MMS. Delivery and read receipts that indicate whether a message has been successfully delivered and when it has been read by the recipient. Live typing indicators that inform you when someone in a thread is composing a message. Location sharing that allows you to share a location on a map to help meet up or give directions. File sharing that allows you to share documents and files with other participants in a message thread. Many of the features added by RCS are present in other mobile messaging apps like Apple's iMessage, WhatsApp, and Facebook Messenger. RCS is supported directly by the mobile carriers and uses your phone number for identification, so it does not require a separate account or app — if two people use an RCS-compatible messaging app to exchange text messages, they will automatically use RCS and can access its additional features; if only one participant supports RCS, the apps will fall back to SMS or MMS messages instead. Most mobile carriers and Android smartphones support RCS, but the lack of support on the iPhone creates a barrier to the universal adoption of RCS. Updated October 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rcs Copy

RDBMS

RDBMS Stands for "Relational Database Management System." An RDBMS is a DBMS designed specifically for relational databases. Therefore, RDBMSes are a subset of DBMSes. A relational database refers to a database that stores data in a structured format, using rows and columns. This makes it easy to locate and access specific values within the database. It is "relational" because the values within each table are related to each other. Tables may also be related to other tables. The relational structure makes it possible to run queries across multiple tables at once. While a relational database describes the type of database an RDBMS manages, the RDBMS refers to the database program itself. It is the software that executes queries on the data, including adding, updating, and searching for values. An RDBMS may also provide a visual representation of the data. For example, it may display data in a table like a spreadsheet, allowing you to view and even edit individual values in the table. Some RDBMS programs allow you to create forms that can streamline entering, editing, and deleting data. Most well-known DBMS applications fall into the RDBMS category. Examples include Oracle Database, MySQL, Microsoft SQL Server, and IBM DB2. Some of these programs support non-relational databases, but they are primarily used for relational database management. Examples of non-relational databases include Apache HBase, IBM Domino, and Oracle NoSQL Database. These types of databases are managed by other DBMS programs that support NoSQL, which do not fall into the RDBMS category. Updated December 16, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rdbms Copy
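The sketch below illustrates how an RDBMS executes queries, using Python's built-in sqlite3 module as a small stand-in; the table and values are made up for the example:

import sqlite3

# In-memory database for illustration; a production RDBMS usually runs as a server process
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, title TEXT)")
conn.execute("INSERT INTO employees (name, title) VALUES ('Ada', 'Engineer')")
conn.execute("INSERT INTO employees (name, title) VALUES ('Grace', 'Manager')")

# The database engine runs the query and returns the matching rows
for row in conn.execute("SELECT name, title FROM employees WHERE title = 'Engineer'"):
    print(row)  # ('Ada', 'Engineer')

conn.close()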

RDF

RDF Stands for "Resource Description Framework." RDF is a specification that defines how metadata, or descriptive information, should be formatted. The RDF model uses a subject-predicate-object format, which is a standardized way of describing something. For example, an RDF expression may read, "The computer has a hard drive that stores 250GB." "The computer" is the subject, "has a hard drive that stores" is the predicate, and "250GB" is the object. RDF formatting is used in RSS feeds, which contain short descriptions of Web pages. The RDF standard helps ensure each description contains the subject, predicate, and object necessary to describe the page's content. While humans do not require descriptions to be formatted in such a specific way (we would actually find it rather monotonous), computers benefit from the standard formatting. For example, it makes it easier for computer systems to sort and index RSS feeds based on the RDF descriptions. The end result is more accurate results when people search for articles using keywords. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rdf Copy
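A subject-predicate-object statement can be represented as a simple triple. The Python sketch below only illustrates the structure (the second statement is an invented example); it is not an actual RDF serialization such as RDF/XML:

# Each piece of metadata is a (subject, predicate, object) triple
triples = [
    ("The computer", "has a hard drive that stores", "250GB"),
    ("The computer", "has a display that measures", "27 inches"),  # hypothetical second statement
]

for subject, predicate, obj in triples:
    print(f"{subject} {predicate} {obj}.")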

RDRAM

RDRAM Stands for "Rambus Dynamic Random Access Memory." RDRAM, developed by Rambus Inc., was a high-performance type of computer memory introduced in the late 1990s. Its high-speed bus offered significantly faster data transfer speeds compared to SDRAM (Synchronous DRAM). While typical SDRAM transferred data at speeds up to 133 MHz, RDRAM could reach speeds of over 1 GHz, providing a significant performance boost for memory-intensive tasks. RDRAM's speed advantage came with increased cost and complexity, limiting its widespread adoption as system memory. As a result, it was mostly used in high-performance applications like video memory for graphics cards, cache memory for CPUs, and system memory for workstations and servers. An advanced version called Direct Rambus DRAM (DRDRAM) further increased performance with data transfer rates up to 1.6 GHz. The fastest RDRAM memory modules (designated PC1600) supported a bandwidth of 6,400 MB/s over two 16-bit buses. Despite its impressive speed, RDRAM was eventually overshadowed by more cost-effective technologies like DDR SDRAM, which became the industry standard. NOTE: DDR memory modules are often called DIMMs, while Rambus modules are called RIMMs. Updated September 12, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rdram Copy

Read-only

Read-only A read-only file or storage device contains data that cannot be modified or deleted. While data can be accessed or "read" from a read-only file or device, new data cannot be added or "written" to the device. Most operating systems, such as Windows and OS X, allow you to mark individual files as read-only. In Windows, you can right-click a file and select Properties to view more information about the file. You can click the Read-only checkbox (located in the General tab) to make a file read-only. In OS X, you can right-click a file and select Get Info to view more information about the file. You can then select the Locked or Stationery options in the file's Get Info window to disable write access to the file. A "locked" file on a Mac is the same thing as a read-only file in Windows. Windows and OS X also allow you to set folders or entire storage devices to read-only. In some cases, the storage media itself may be read-only. For example, DVDs and CD-Rs are permanently read-only once the initial data has been written to the disc. Some flash drives and SD cards have a physical switch that can be used to make the media read-only. By setting a storage device to read-only, you can ensure that important data isn't accidentally modified or erased. NOTE: Read-only also refers to a type of memory called ROM that supports reading but not writing. Updated July 20, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/read-only Copy
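Write access to a file can also be disabled programmatically. The Python sketch below removes write permission from a hypothetical file named example.txt and shows that writing to it then fails; the file name is only an example:

import os
import stat

path = "example.txt"  # hypothetical file used for the demonstration
with open(path, "w") as f:
    f.write("some data")

# Remove write permission so the file becomes read-only
os.chmod(path, stat.S_IREAD)  # on Windows, this sets the Read-only attribute

try:
    with open(path, "w") as f:
        f.write("new data")  # attempting to write now fails
except PermissionError as err:
    print("File is read-only:", err)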

Readme

Readme A Readme file (sometimes capitalized README) is a text file that provides some basic documentation for a software program. It is usually paired with, or installed by, a program's installer. Most Readme files are plain text, although some may include images or video. Readme files typically include a few categories of information. Most include instructions for configuring and using the program, especially if it requires steps that may not be intuitive. It will also display copyright information and developer credits, letting users know who was involved in the program's development. Comprehensive Readme files will also include troubleshooting steps to address common problems and a change log of recent updates, bug fixes, and patches. Now that nearly all software distribution happens over the Internet, the information covered in a Readme file is often also found on the program's website. This allows the program's developer to update information frequently and provide a change log that the user can see before updating. The developer can even create more comprehensive documentation with the assistance of the program's user base by using Wiki software. Updated November 15, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/readme Copy

Real Number

Real Number A real number is any positive or negative number. This includes all integers and all rational and irrational numbers. Rational numbers may be expressed as a fraction (such as 7/8) and irrational numbers may be expressed by an infinite decimal representation (3.1415926535...). Real numbers that include decimal points are also called floating point numbers, since the decimal "floats" between the digits. Real numbers are relevant to computing because computer calculations involve both integer and floating point calculations. Since integer calculations are generally simpler than floating point calculations, a computer's processor may use a different type of logic for performing integer operations than it does for floating point operations. The floating point operations may be performed by a separate part of the CPU called the floating point unit, or FPU. While computers can process all types of real numbers, irrational numbers (those with infinitely many non-repeating decimal digits) are generally estimated. For example, a program may limit all real numbers to a fixed number of decimal places. This helps save extra processing time, which would be required to calculate numbers with greater, but unnecessary, accuracy. Updated May 14, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/realnumber Copy
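The Python sketch below illustrates how irrational values are stored only as floating point approximations, and how a program might round them to a fixed number of decimal places:

import math

# Irrational numbers can only be stored as approximations in floating point
print(math.pi)            # 3.141592653589793 (a 64-bit approximation of pi)
print(math.sqrt(2))       # 1.4142135623730951 (an approximation of the square root of 2)

# Rounding to a fixed number of decimal places trades accuracy for simpler calculations
print(round(math.pi, 4))  # 3.1416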

Real-Time

Real-Time When an event or function is processed instantaneously, it is said to occur in real-time. To say something takes place in real-time is the same as saying it is happening "live" or "on-the-fly." For example, the graphics in a 3D action game are rendered in real-time by the computer's video card. This means the graphics are updated so quickly, there is no noticeable delay experienced by the user. While some computer systems may be capable of rendering more frames per second than other systems, the graphics are still being processed in real-time. While video games often require real-time rendering, not all graphics are rendered in real-time. For example, some complex 3D models and animations created for movies are not rendered in real-time, but instead are pre-rendered on a computer system so they can be played back in real-time. As graphics cards get increasingly faster, they are capable of rendering some 3D animations in real-time that previously would need to be pre-rendered. Real-time also describes the way streaming media is processed. Instead of waiting for a file to completely download, the information is played back as it is downloaded. This allows for news broadcasts, sound clips, and other streaming audio and video data to be played live from the Internet. Thanks to real-time processing, people can access information without having to wait for it. This is an important benefit since these days, anything that takes longer than 5 seconds seems like a long time. Updated January 8, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/realtime Copy

Reciprocal Link

Reciprocal Link A reciprocal link is a mutual link between two websites. For example, if website A links to website B, then website B can add a reciprocal link back to website A. The result of a reciprocal link is two websites that link to each other. Reciprocal links are typically created for one of two purposes: 1) to establish a partnership between two websites, or 2) to boost search engine ranking. If two websites provide related information, the webmasters may decide it makes sense to link to each other. They can establish an online partnership by providing reciprocal links on their websites. Additionally, if a company or individual owns multiple sites, the webmaster may add reciprocal links to each site so that visitors are aware of the other sites. Reciprocal linking is also used to boost search engine ranking. Since search engine ranking algorithms factor in the number of incoming links, or "inlinks," a website has, reciprocal links can help increase a website's search engine ranking. However, since search engines also factor in the quality of each site providing an incoming link, not all reciprocal links are beneficial. NOTE: A reciprocal link may link to any page within a website, not just the page that contains the link back to the site. Updated August 4, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/reciprocal_link Copy

Record

Record A record is a database entry that may contain one or more values. Groups of records are stored in a table, which defines what types of data each record may contain. Databases may contain multiple tables which may each contain multiple records. Records are often called rows since each new record creates a new row in the table. Individual fields are sometimes called columns since they are the same for each record within a table. While the words "record" and "row" are often used interchangeably, most DBMSes use the word "row" for database queries and error messages. Therefore, if you see an error message on a website that says "Unable to jump to row 89," for example, it means the record was not found in the database. Records are an efficient way to store and access data. Since each record may contain multiple data types, a single record may include many different types of information. For example, a personnel record may contain an ID number, name, birthdate, and photo, which are all different data types. Individual fields within the personnel record can be easily accessed or compared with other records using a database query. Additionally, records can be easily created, modified, and deleted without affecting other data in the database. NOTE: The word "record" is also a verb (pronounced "re-cord"), which means to capture and save audio and/or video data. Updated August 25, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/record Copy

Recursion

Recursion Recursion is a process in computer programming in which a function calls on itself as a subroutine. The concept is helpful when addressing a problem that can be solved by breaking it up into smaller copies of the same problem. Every time a recursive function runs, it tells itself to run again, not stopping until it meets a specified condition. Functions that incorporate recursion are called recursive functions. A software developer building a recursive function needs to identify the base case. The base case is the part of the problem where the solution is known, so the problem is solved without more recursion. A recursive function iterates until it reaches the base case. At that point the recursion ends, and the program can move on to the next task. Properly-used recursion is an efficient programming method since it minimizes the amount of code needed to complete a task. However, if the base case is not set (or is not reachable), the recursion will continue indefinitely. Endless recursion is called an infinite loop and will eventually cause a program to crash. Updated October 31, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/recursion Copy
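A classic illustration is computing a factorial, where the base case is a value of 1 or less. The Python sketch below is a generic example rather than part of the definition above:

def factorial(n):
    # Base case: the answer is known, so no further recursion is needed
    if n <= 1:
        return 1
    # Recursive case: the problem is reduced to a smaller copy of itself
    return n * factorial(n - 1)

print(factorial(5))  # 120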

Recursive Function

Recursive Function A recursive function is a function that calls itself during its execution. The process may repeat several times, outputting the result at the end of each iteration. The function Count() below uses recursion to count from any number between 1 and 9, to the number 10. For example, Count(1) would return 2,3,4,5,6,7,8,9,10. Count(7) would return 8,9,10. The result could be used as a roundabout way to subtract the number from 10.

function Count (integer N)
    if (N <= 0 || N > 9) return "Counting Completed";
    else return Count (N+1);
end function

Recursive functions allow programmers to write efficient programs using a minimal amount of code. The downside is that they can cause infinite loops and other unexpected results if not written properly. In the example above, the function terminates if the number is 0 or less or greater than 9. If proper cases are not included in a recursive function to stop the execution, it will repeat forever, causing the program to crash or become unresponsive. Updated September 21, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/recursive_function Copy

Recycle Bin

Recycle Bin When you delete a file or folder in Windows, it is placed in the Recycle Bin. Items are temporarily stored in the Recycle Bin before they are permanently deleted by the user. The Recycle Bin is located on the Windows desktop. When it is empty, the icon is an empty recycle bin. If it contains one or more items, the icon changes to a recycle bin with papers in it. You can move items to the Recycle Bin by either dragging them to the Recycle Bin icon or by selecting the items and pressing the Delete key. You may also right-click an item and select "Delete" from the pop-up menu. You can open the Recycle Bin by double-clicking its icon. This allows you to view the files the Recycle Bin contains, just like a typical folder. However, in the left sidebar of the window, there is a "Recycle Bin Tasks" section that includes the options "Empty the Recycle Bin" and "Restore all items." Since Windows remembers the original location of each item, if you select "Restore all items," the files will each be placed back in their original location. You can also select items individually and restore them back to their previous folders. Emptying the Recycle Bin If you select "Empty the Recycle Bin," all the items in the Recycle Bin will be permanently deleted. If you only want to delete a single item, you can select it, press Delete, then confirm you want to delete it. Deleted items cannot be restored, so you should only empty the Recycle Bin if you are sure you no longer need the files. It is a good idea to empty the Recycle Bin on a regular basis because it frees up disk space for other files. Recycle or Trash? The Recycle Bin serves the same purpose as the Trash found on Macintosh computers. While the name is not as simple, it is, admittedly, more eco-friendly. It also reflects the idea that when you empty the Recycle Bin, it makes more disk space available, thereby "recycling" the sections of the disk that contained the previous data. Updated August 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/recyclebin Copy

Redundancy

Redundancy The general definition of redundancy is exceeding what is normal. However, in computing, the term is used more specifically and refers to duplicate devices that are used for backup purposes. The goal of redundancy is to prevent or recover from the failure of a specific component or system. There are many types of redundant devices. The most common in personal computing is a backup storage device. While most other computer components can be easily replaced, if a hard drive fails, it may not be possible to recover personal data. Therefore, it is important to regularly back up your data to a secondary hard drive. In enterprise situations, a RAID configuration can be used to mirror data across two drives in real-time. Another type of redundant device is a secondary power supply. High traffic web servers and other critical systems may have multiple power supplies that take over in case the primary one fails. While an uninterruptible power supply (UPS) is not technically a redundant device, the battery within the UPS provides power redundancy for a few minutes if electricity is lost. Computer networks often implement redundancy as well. From local area networks to Internet backbone connections, it is common to have redundant data paths. This means if one system goes down, the connection between other systems will not be broken. For example, an FDDI network has a duplicate data "ring" that is used automatically when the primary data path is interrupted. Network redundancy can be accomplished by either adding extra physical connections or using networking software that automatically reroutes data when needed. Updated November 23, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/redundancy Copy

Refresh

Refresh Refresh is a command that reloads the contents of a window or Web page with the most current data. For example, a window may list files stored within a folder, but may not track their location in real-time. If the files have been moved or deleted since the window was first opened, the folder contents displayed will be inaccurate. By refreshing the window, a current list of files is displayed. Web browsers include a Refresh command, which reloads the contents of a Web page. This is especially useful for dynamic Web pages, which contain content that changes often. For example, a page may include a stock quote, which is updated every few seconds. By refreshing the page, a user can see the latest quote and track how much the stock continues to drop since he bought it. Web developers may also use the Refresh command to view recently published changes to Web pages. Since refreshing a window reloads it with new information, the terms "refresh" and "reload" are often used synonymously. In fact, some Web browsers, such as Firefox and Safari use the term "Reload" instead of "Refresh." In Windows, the shortcut key for the Refresh command is typically "F5," while on the Mac, the shortcut is often "Command-R." The term "Refresh" may also refer to the redrawing process of a computer monitor. This process usually happens many times per second and is called the "refresh rate." Updated March 19, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/refresh Copy

Refresh Rate

Refresh Rate Computer monitors often have a "maximum refresh rate" listed in their technical specifications. This number, measured in hertz (Hz), determines how many times the screen is redrawn each second. Typical refresh rates for CRT monitors include 60, 75, and 85 Hz. Some monitors support refresh rates of over 100 Hz. The higher the refresh rate, the less image flicker you will notice on the screen. Typically a refresh rate of less than 60 Hz will produce noticeable flicker, meaning you can tell the screen is being redrawn instead of seeing a constant image. If the refresh rate is too slow, this flicker can be hard on your eyes and may cause them to tire quickly. As if sitting at a computer for several hours wasn't hard enough! To avoid flicker, you should set your monitor to use the maximum refresh rate possible. This setting is found in the Monitors control panel in Windows and the Displays system preference in Mac OS X. While 60 Hz is considered a good refresh rate, some people will find that 85 Hz is significantly better. The maximum refresh rate is determined by three factors: 1) The rate your video card supports, 2) the rate your monitor supports, and 3) the resolution your monitor is set at. Lower resolutions (i.e. 800x600) typically support higher refresh rates than higher resolutions (i.e. 1600x1200). If you have an LCD monitor, you may not be able to adjust the refresh rate. This is because most LCD monitors come with a standard refresh rate that is well above the "flicker" point. LCD monitors produce less flicker than CRT monitors because the pixels on an LCD screen stay lit longer than CRT monitors before they noticeably fade. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/refreshrate Copy

ReFS

ReFS Stands for "Resilient File System." ReFS is a file system developed by Microsoft. It was first introduced with Windows Server 2012 and is primarily used by file servers and data centers. Many of its features focus on protecting a disk's contents from data corruption while minimizing downtime. It also includes features that speed up the performance of virtual machines and multi-drive storage pools. The benefits of using ReFS over NTFS or other file systems focus on resiliency against corruption, hence the name "Resilient File System." It includes several features that actively monitor files to detect and repair data corruption even while the disk is in use. ReFS creates checksums for file metadata (and optionally for file data itself) that can indicate when data is corrupted. When used in a Storage Spaces volume (a Windows feature that creates a multi-disk storage pool similar to a RAID array), it automatically uses its mirror and parity volumes to spot when one copy of a file no longer matches the others. It can then replace the corrupt copy with a fixed version. Despite its improvements over NTFS, ReFS is not yet suitable as a computer's default file system. Many of its data resilience features require using Storage Spaces, which benefits large storage volumes for archiving data while providing no benefit for the operating system and installed applications. These data resilience features also come at a performance cost, so ReFS operates slower than NTFS. It also lacks several features from NTFS, including full-disk compression, extended attributes, and disk quotas. More importantly, ReFS does not yet support bootable or removable storage devices, so you cannot use it on your boot disk or any external drives. ReFS is supported only by Windows (8.1 and later) and Windows Server (2012 and later). Linux, Unix, and macOS cannot mount a ReFS volume directly and must access a ReFS volume's files through a Windows file server over a network. Updated September 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/refs Copy

Register

Register A register is a temporary storage area built into a CPU. Some registers are used internally and cannot be accessed outside the processor, while others are user-accessible. Most modern CPU architectures include both types of registers. Internal registers include the instruction register (IR), memory buffer register (MBR), memory data register (MDR), and memory address register (MAR). The instruction register fetches instructions from the program counter (PC) and holds each instruction as it is executed by the processor. The memory registers are used to pass data from memory to the processor. Internal registers hold data for an extremely short time, often less than a millisecond. User-accessible registers are larger than internal registers and typically hold data for a longer time. For example, a data register may store individual values being referenced by a currently running program. An address register contains memory addresses, which reference different blocks of memory within the system RAM. Many CPUs now have general purpose registers (GPRs), which may contain both data and memory addresses. Registers vary in both number and size, depending on the CPU architecture. Some processors have 8 registers while others have 16, 32, or more. For many years, registers were 32-bit, but now many are 64-bit in size. A 64-bit register is necessary for a 64-bit processor, since it enables the CPU to access 64-bit memory addresses. A 64-bit register can also store 64-bit instructions, which cannot be loaded into a 32-bit register. Therefore, most programs written for 32-bit processors can run on 64-bit computers, while 64-bit programs are not backwards compatible with 32-bit machines. Updated January 30, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/register Copy

Registry

Registry The Windows Registry is a database of settings used by Microsoft Windows. It stores configurations for hardware devices, installed applications, and the Windows operating system. The Registry provides a centralized method of storing custom preferences for each Windows user, rather than storing them as individual .INI files. The Windows Registry is structured as a hierarchy that has several top-level categories, also known as "hives." Each begins with "HKEY," short for "Handle to Registry Key." Some versions of Windows have as many as seven top-level categories, while Windows 10 has five. These include: HKEY_CLASSES_ROOT - stores file associations, which links file extensions to default programs - stores configurations for COM (Component Object Model) objects, such as Microsoft OLE documents and ActiveX components HKEY_CURRENT_USER - stores user-level preferences selected using the Control Panel - stores user-level network and printer settings HKEY_LOCAL_MACHINE - stores system and application settings that apply to all users - stores system-level network and printer settings HKEY_USERS - stores settings temporarily for active users (those who are currently logged in) HKEY_CURRENT_CONFIG - stores differences between the standard configuration and the current hardware configuration The Windows operating system and installed programs can modify the Registry directly. For example, changing the default program for a specific file extension will update the Registry automatically. Therefore, it is typically not necessary to edit the Windows Registry. However, power users may wish to view and manually override some Registry settings. Windows provides a Registry Editor utility (regedit.exe) for viewing and editing Registry data. This utility can also import settings and export existing configurations. NOTE: You should not edit the Windows Registry unless you understand what you are doing. Incorrect Registry settings may cause problems with installed applications or the operating system itself. Updated July 12, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/registry Copy
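Programs can read Registry values through system APIs. The Python sketch below uses the standard winreg module (available only on Windows) to read a value from HKEY_CURRENT_USER; the key and value names are illustrative and may differ on your system:

import winreg  # Windows-only standard library module

# Read an environment variable stored under HKEY_CURRENT_USER
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Environment") as key:
    value, value_type = winreg.QueryValueEx(key, "TEMP")
    print(value)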

Regular Expression

Regular Expression A regular expression (or "regex") is a search pattern used for matching one or more characters within a string. It can match specific characters, wildcards, and ranges of characters. Regular expressions were originally used by Unix utilities, such as vi and grep. However, they are now supported by many code editing applications and word processors on multiple platforms. Regular expressions can also be used in most major programming languages. A regular expression can be as simple as a basic string, such as "app". The regex "app" would match strings containing the words "apps," "applications," and "inapplicable". A regular expression may also contain anchor characters ("^" and "$") which are used to specify the beginning and end of a line, respectively. Therefore, the regex "^apps" would match the string, "apps are great," but would not match the string, "I like apps." Regular expressions can include character ranges, written with a dash inside square brackets, which match any single character in the range, such as any lowercase letter. For example, the pattern "[a-z]+" matches the entire string "apps," but not "Apps" or "123". The pattern "[A-Za-z]+" matches "Apps" and "[0-9]+" matches "123". A period, which is the standard wildcard character in regular expressions, can be used to match any character (except an end-of-line character). A period followed by an asterisk (.*) matches zero or more instances, while a period followed by a plus (.+) matches one or more instances. So what happens if you need to match a string containing a dash, asterisk, plus, or an anchor character? These characters can be included in a regular expression pattern by "escaping" them with a backslash ("\"). For example, the regex "1\+1" matches the literal text "1+1" rather than treating the plus sign as a quantifier.
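The patterns discussed above can be tested in any language with regex support. The Python sketch below uses the standard re module to try a few of them:

import re

print(re.search(r"app", "inapplicable"))      # matches -- "app" appears within the string
print(re.search(r"^apps", "apps are great"))  # matches -- "apps" is at the start of the line
print(re.search(r"^apps", "I like apps"))     # None -- "apps" is not at the start
print(re.findall(r"[0-9]+", "version 123"))   # ['123']
print(re.search(r"1\+1", "1+1=2"))            # matches -- the backslash escapes the plus sign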

Reimage

Reimage When you reimage a hard disk, you restore the entire disk from a disk image file. Since this restore process involves erasing all the current data on the hard disk, it is typically used as a last-resort system recovery option. The disk image used to restore a computer may be a recent backup from the computer's hard drive or it may be a disk image with completely new data. Most home users would reimage a hard disk from a personal backup file, such as a Norton Ghost disk image (Windows) or a Time Machine backup (Mac). Reimaging from a backup file allows you to recover all the data that was saved at the time of the backup. System administrators, on the other hand, may reimage business computers using standard disk images instead of backup files. This ensures all computers within a department have the same data. Reimaging is a useful system restore method, since it rebuilds a hard drive with the exact content saved in the disk image. This includes all the folders and files as well as the hard disk file system information. Therefore, reimaging is a one-step process, which doesn't require multiple installations. While the term "reimage" is sometimes used synonymously with an OS reload (or operating system reinstallation), it technically refers only to the process of restoring data from a disk image. Reimage is also the name of a Windows PC hard disk repair utility. Updated January 19, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/reimage Copy

Relational Database

Relational Database A relational database is a database model that stores data in tables. The vast majority of databases used in modern applications are relational, so the terms "database" and "relational database" are often used synonymously. Likewise, most database management systems (DBMSes) are relational database management systems (RDBMSes). Other database models include flat file and hierarchical databases, though these are rarely used. Each table in a relational database contains rows (records) and columns (fields). In computer science terminology, rows are sometimes called "tuples," columns may be referred to as "attributes," and the tables themselves may be called "relations." A table can be visualized as a matrix of rows and columns, where each intersection of a row and column contains a specific value. It is "relational" since all records share the same fields. Database tables often include a primary key, which provides a unique identifier for each row within the table. The key may be assigned to a column (which requires a unique value in each row), or it may be made up of multiple columns that together form a unique combination of values. Either way, a primary key provides an efficient way of indexing data and can be used to share values between tables within a database. For example, the value of a primary key from one table can be assigned to a field in a row of another table. Values imported from other tables are called foreign keys. The standard way to access data from a relational database is through an SQL (Structured Query Language) query. SQL queries can be used to create, modify, and delete tables, as well as select, insert, and delete data from existing tables. Updated July 16, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/relational_database Copy
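The sketch below illustrates primary and foreign keys with an SQL query, using Python's built-in sqlite3 module; the tables and rows are invented for the example:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (
        author_id INTEGER PRIMARY KEY,   -- unique identifier for each row
        name      TEXT
    );
    CREATE TABLE books (
        book_id   INTEGER PRIMARY KEY,
        title     TEXT,
        author_id INTEGER REFERENCES authors(author_id)  -- foreign key
    );
    INSERT INTO authors VALUES (1, 'Ada Lovelace');
    INSERT INTO books VALUES (10, 'Notes on the Analytical Engine', 1);
""")

# A query can relate the two tables through the shared key
query = """
    SELECT books.title, authors.name
    FROM books JOIN authors ON books.author_id = authors.author_id
"""
for row in conn.execute(query):
    print(row)  # ('Notes on the Analytical Engine', 'Ada Lovelace')

conn.close()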

Remote Access

Remote Access Remote access refers to connecting to and controlling a computer over a network connection. Remote users often have the same control over the computer as they would if they had physical access to it. It allows a website administrator to remotely manage a web server at a data center, or a person working from home to access and control their computer at the office. A computer set up to allow remote access can provide a text-based command line or a full graphical user interface, depending on which protocol the connection uses — SSH provides a text-based command line, while VNC and Microsoft's proprietary RDP protocol provide a GUI. All methods require that both computers run compatible remote access software — server software on one and a client app on the other. Some operating systems include remote access server software built-in, including Professional (but not Home) editions of Windows, macOS, and most Linux distributions. Third-party applications are also available that offer more features than built-in tools. Connections between computers are encrypted to prevent other people and devices from eavesdropping on a remote access session. SSH and RDP both encrypt connections by default, but VNC does not; instead, VNC sessions are often tunneled through an encrypted SSH connection. Remote access sessions may also use a VPN for additional security, and in many cases (like connecting to a computer on a corporate network), a VPN is required to ensure a secure connection. Updated November 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/remote_access Copy
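Remote access can also be scripted. The Python sketch below assumes the third-party paramiko SSH library is installed; the host name and credentials are placeholders, not real values:

import paramiko  # third-party SSH library, assumed to be installed (pip install paramiko)

# Connect to a remote machine over SSH (placeholder host and credentials)
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="admin", password="placeholder")

# Run a command on the remote machine and read its output
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())

client.close()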

Remote Desktop

Remote Desktop Remote desktop is an operating system or software feature which allows a user to log into and control a computer remotely using its graphical user interface. Remote users can access files and run programs on a remote computer as if they were sitting at its keyboard and mouse. Windows, macOS, Unix, and Linux all support remote desktop connections, although they use different protocols and implementations. Professional and Server versions of Windows include a built-in remote desktop server that uses Microsoft's proprietary Remote Desktop Protocol (RDP). If remote desktop is enabled, authorized users can use the Remote Desktop Connection application (or another RDP-compatible client) to connect to a computer over a local network; remote connections over the Internet are possible through port forwarding or a VPN. RDP uses TLS encryption to prevent eavesdropping. While the Home versions of Windows include an RDP client, they do not include an RDP server. To remotely connect to a computer running a Home version of Windows, you will need to install server software separately. MacOS supports remote desktop connections through Screen Sharing, which enables remote control of a computer's desktop and GUI. Screen Sharing uses an open technology called Virtual Network Computing (VNC) to control the remote computer. You can use the built-in Screen Sharing app or any other VNC-compatible client to remotely control a Mac. You can also use Apple's higher-end Apple Remote Desktop app for additional remote management features like remote software installation, system shutdown, and automation tasks. Unix and Linux support remote desktop connections using both VNC and RDP if the proper server software is present. While most remote administration tasks in Unix and Linux are possible through a text-based terminal connection, a graphical remote connection can make things easier if you don't know the necessary terminal commands. Since it is common to operate a Unix and Linux server without a monitor, using a remote desktop is often the only way to get to a desktop environment. NOTE: While RDP uses encryption to protect traffic between the remote computer and the client, VNC does not. However, many VNC clients support tunneling through SSH to provide a secure and encrypted remote desktop connection. Updated June 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/remote_desktop Copy

Remote User

Remote User A remote user is a person that connects to one computer from a different computer. Remote users may connect to another computer over a local network, a virtual private network, or the Internet. Accessing a computer as a remote user requires a user account on the destination computer and remote connection software — either a graphical remote desktop application or a text-based terminal. Remote users connect to a computer for a variety of tasks. For example, some people may occasionally connect remotely to a work computer from home for a few minutes to access a single file. Other remote users may remotely connect to a computer to configure and maintain server software on it daily. Technical support staff may even remotely connect to another computer at their location to troubleshoot problems. Windows, macOS, and Unix-like operating systems all include support for remote users. Since Windows primarily uses a graphical user interface, most remote users connect using the Remote Desktop Connection app. Unix and Unix-like operating systems, including macOS, have better support for a command-line-only interface, allowing remote users to connect through a text-based terminal using the SSH protocol. Updated February 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/remoteuser Copy

Rendering

Rendering Rendering is the process of generating a final digital product from a specific type of input. The term usually applies to graphics and video, but it can refer to audio as well. 1. Graphics 3D graphics are rendered from basic three-dimensional models called wireframes. A wireframe defines the shape of the model, but nothing else. The rendering process adds surfaces, textures, and lighting to the model, giving it a realistic appearance. For example, a 3D drawing application or a CAD program may allow you to add different colors, textures, and lighting sources to a 3D model. The rendering process applies these settings to the object. Thanks to the power of modern GPUs, 3D image rendering is often done in real-time. However, with high-resolution models, surfaces and lighting effects may need to be rendered using a specific "Render" command. For example, a CAD program may display low-resolution models while you are editing a scene, but provide an option to render a detailed model that you can export. 2. Video 3D animations and other types of video that contain CGI often need to be rendered before viewing the final product. This includes the rendering of both 3D models and video effects, such as filters and transitions. Video clips typically contain 24 to 60 frames per second (fps), and each frame must be rendered before or during the export process. High-resolution videos or movies can take several minutes or even several hours to render. The rendering time depends on several factors including the resolution, frame rate, length of the video, and processing power. While video clips often need to be pre-rendered, modern GPUs are capable of rendering many types of 3D graphics in real-time. For example, it is common for computers to render high-definition video game graphics at over 60 fps. Depending on the graphics power, a game's frame rate may be faster or slower. If the GPU cannot render at least 30 frames per second, the video game may appear choppy. 3. Audio Like video effects, audio effects can also be rendered. For example, a DAW application may include effects like reverb, chorus, and auto-tune. The CPU may be able to render these effects in real-time, but if too many tracks with multiple effects are being played back at once, the computer may not be able to render the effects in real-time. If this happens, the effects can be pre-rendered, or applied to the original audio track. All effects are rendered when the final mix is exported or "bounced" as an audio file. Updated June 3, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rendering Copy

Repeater

Repeater A repeater is an electronic device that relays a transmitted signal. It receives a signal on a specific frequency, then amplifies and rebroadcasts it. By amplifying the signal, a repeater increases the transmission range of the original signal. Repeaters have many applications, but in computing they are most commonly used in wireless networks. For example, a Wi-Fi network in a large home may benefit from using one or more repeaters to relay the signal to different areas of the house. Homes that have brick walls or cement floors may also benefit from having a repeater relay the signal around the obstacle. Businesses often use a series of repeaters to create a single wireless network within a large building. While repeaters all serve the same purpose, they come in many forms. Some wireless devices, often called "range extenders" are designed to be used specifically as repeaters. Other devices, such as hubs, switches, and routers can all be configured as repeaters using a software utility or web interface that controls the wireless device. NOTE: Since repeaters only relay an incoming signal, using a router as a repeater does not make use of its signal routing capability. Therefore, it makes more sense to use a range extender as a repeater if possible. Updated November 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/repeater Copy

Repository

Repository In software development, a repository is a central file storage location. It is used by version control systems to store multiple versions of files. While a repository can be configured on a local machine for a single user, it is often stored on a server, which can be accessed by multiple users. A repository contains three primary elements — a trunk, branches, and tags. The trunk contains the current version of a software project. This may include multiple source code files, as well as other resources used by the program. Branches are used to store new versions of the program. A developer may create a new branch whenever he makes substantial revisions to the program. If a branch contains unwanted changes, it can be discontinued. Otherwise, it can be merged back into the trunk as the latest version. Tags are used to save versions of a project, but are not meant for active development. For example, a developer may create a "release tag" each time a new version of the software is released. A repository provides a structured way for programmers to store development files. This can be helpful for any type of software development, but it is especially important for large development projects. By committing changes to a repository, developers can quickly revert to a previous version of a program if a recent update causes bugs or other problems. Many version control systems even support side-by-side comparisons of different versions of files saved in the repository, which can be helpful for debugging source code. Additionally, when a repository is stored on a server, users can "check out" files for editing, which prevents files from being edited by more than one user at a time. Updated August 18, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/repository Copy

Resistor

Resistor A resistor is an electrical component that limits the flow of electric current. One or more resistors can be used to provide the correct amount of current to specific components within an electronic device. Resistors are often soldered onto a printed circuit board to limit the amount of current that flows to different electrical paths. If too little current reaches a component, it may not operate. If too much current is allowed through, it can damage the component. Therefore, resistors play an important role in an electronic circuit. Several types of resistors exist, but most are made up of carbon and an insulating material, such as ceramic. Current flows in one end of the resistor and out the other, and the amount of current allowed through is inversely proportional to the resistance. This is defined in Ohm's law, which states that the current (I) is equal to the voltage (V) divided by the resistance (R). I = V / R Resistors are often color-coded to visually represent their resistance levels. A typical axial-lead resistor, for instance, is cylindrical in shape and has several colored stripes. The first few stripes represent digits, followed by a stripe which represents a multiplier (10x, 100x, etc.). On the other end is a stripe that represents the tolerance, which defines the accuracy of the resistor. Some resistors also include one more band that represents the temperature coefficient. NOTE: Resistors are represented in circuit diagrams by a jagged line. They are usually labeled R1, R2, R3, etc. Updated November 9, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/resistor Copy
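Ohm's law is easy to check with a short calculation. The Python sketch below computes the current for a couple of example resistor values; the numbers are illustrative only:

# Ohm's law: I = V / R
def current_amps(voltage_volts, resistance_ohms):
    return voltage_volts / resistance_ohms

print(current_amps(5.0, 250))   # 0.02 A (20 mA) through a 250-ohm resistor at 5 volts
print(current_amps(5.0, 1000))  # 0.005 A (5 mA) -- more resistance, less current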

Resolution

Resolution Resolution measures the number of pixels in a digital image or display. It is defined as width by height, or W x H, where W is the number of horizontal pixels and H is the number of vertical pixels. For example, the resolution of an HDTV is 1920 x 1080. Image Resolution A digital photo that is 3,088 pixels wide by 2,320 pixels tall has a resolution of 3088 × 2320. Multiplying these numbers together produces 7,164,160 total pixels. Since the photo contains just over seven million pixels, it is considered a "7 megapixel" image. Digital camera resolution is often measured in megapixels, which is simply another way to express the image resolution. "Resolution" is often used synonymously with "size" when describing the dimensions of a digital image. However, the term "size" can be a bit ambiguous, since it may refer to the file size of the image or its dimensions. Therefore, it is best to use "resolution" when describing the dimensions of a digital image. Display Resolution Every monitor and screen has a specific resolution. A "Full HD" display has a resolution of 1920 x 1080 pixels. A 4K display has twice the resolution of Full HD, or 3840 x 2160 pixels. It is called "4K" since the screen is nearly 4,000 pixels across horizontally. The total number of pixels in a 4K display is 8,294,400, or just over eight megapixels. Monitor resolution defines how many pixels a screen can display, but it does not describe how fine the image is. For example, a 27" iMac 5K display has a resolution of 5120 x 2880 while an older 27" Apple Thunderbolt Display has exactly half the resolution of 2560 x 1440. Since the 27" iMac is the same physical size as the Thunderbolt Display but has twice the resolution, it has twice the pixel density, measured in pixels per inch, or PPI. NOTE: Unlike monitor resolution, the resolution of a printer or scanner is analogous to pixel density rather than total pixels. Printer and scanner resolution is measured in dots per inch, or DPI. Updated August 26, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/resolution Copy
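
The megapixel and pixel-density figures above come from simple arithmetic, sketched below in Python using the same dimensions mentioned in this entry. The pixel-density calculation assumes the usual definition of PPI as the pixel count along the diagonal divided by the physical diagonal in inches.

    import math

    def megapixels(width, height):
        return width * height / 1_000_000

    def ppi(width, height, diagonal_inches):
        # Pixels per inch: pixel diagonal divided by the physical diagonal.
        return math.hypot(width, height) / diagonal_inches

    print(megapixels(3088, 2320))        # ~7.2 megapixels
    print(megapixels(3840, 2160))        # ~8.3 megapixels (4K)
    print(round(ppi(5120, 2880, 27)))    # ~218 PPI (27" 5K display)
    print(round(ppi(2560, 1440, 27)))    # ~109 PPI (27" Thunderbolt Display)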

Responsive Web Design

Responsive Web Design Responsive web design (or "RWD") is a type of web design that provides a customized viewing experience for different browser platforms. A website created with RWD will display a different interface depending on what device is used to access the site. For example, a responsive website may appear one way on a laptop, another way on a tablet, and still another way on a smartphone. Today, many people access websites from mobile devices, rather than desktop computers or laptops. While most smartphones can display regular websites, the content is difficult to read and even harder to navigate. Therefore, many web developers now use responsive web design to provide a better web browsing experience on small screens. Websites designed for mobile devices (such as iOS and Android phones) typically have a larger default font size and simplified navigation. Some sites also take advantage of the touchscreen interface, providing support for swiping, rotating, and pinching and zooming. A mobile-optimized website may also remove unnecessary content to reduce the need for scrolling. Image rollovers and other mouse hover features are often removed, since touchscreens don't support a mouse pointer. There are several ways to implement responsive web design. One method is to dynamically detect the user's platform (such as an iPad or iPhone) and load specific HTML and CSS for the corresponding device. Another option is to use media queries, which automatically load different CSS styles depending on the size of the browser window. The Bootstrap package, which contains several prewritten JavaScript and CSS files, is based on media queries and is commonly used in responsive web design. Updated August 20, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/responsive_web_design Copy
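
The first implementation method mentioned above, detecting the user's platform and serving different markup and styles, can be sketched roughly on the server side. The Python fragment below is only an illustration with made-up template names; most responsive sites rely on CSS media queries in the browser instead.

    # Rough sketch of server-side device detection (hypothetical template names).
    def choose_template(user_agent):
        ua = user_agent.lower()
        if "iphone" in ua or ("android" in ua and "mobile" in ua):
            return "phone.html"      # larger fonts, simplified navigation
        elif "ipad" in ua or "tablet" in ua:
            return "tablet.html"
        return "desktop.html"

    print(choose_template("Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)"))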

REST

REST Stands for "Representational State Transfer." REST is a set of architectural guidelines for transferring resources over the web. The REST method is commonly applied to APIs ("REST APIs"), which allow developers to access data from remote systems over the Internet. REST defines six architectural constraints: Client-server model – a server processes client requests, and clients can only access required data from the server Stateless – requests are transient and no client data is stored on the server Cacheable – requests can be cached to improve performance Uniform interface – access to data must be consistent for all users and systems Layered system support – requests can be processed hierarchically and routed across multiple systems Code on demand (optional) – servers can deliver executable code to client systems REST vs. SOAP REST is often contrasted with SOAP, another set of standards for transferring data over the Internet. However, REST is simply a design style, while SOAP is an actual protocol with specific rules and commands. Both REST and SOAP APIs transmit data over HTTP, but SOAP is limited to XML data, while REST is more flexible. Because SOAP is an official protocol, all messages sent via SOAP must use the same "envelope" format, with a header and body. REST, which came after SOAP, has fewer constraints and is often considered a lightweight alternative to SOAP. NOTE: APIs that conform to REST guidelines are technically called "RESTful APIs," but many developers refer to them as "REST APIs" instead. Updated July 15, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rest Copy
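
In practice, a REST API is usually consumed with plain HTTP requests. The Python sketch below fetches a resource from a hypothetical REST endpoint using only the standard library; the URL and the shape of the JSON response are invented for illustration.

    import json
    import urllib.request

    # Hypothetical REST endpoint -- replace with a real API URL.
    url = "https://api.example.com/v1/users/42"

    with urllib.request.urlopen(url) as response:
        user = json.load(response)       # REST APIs commonly return JSON

    print(user.get("name"))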

Restore

Restore The word "restore" means to return something to its former condition. Therefore, when you restore a computer or other electronic device, you return it to a previous state. This may be a previous system backup or the original factory settings. Restoring a computer, often called a "system restore," is often done as a last resort to fix a problematic machine. For example, if a virus has infected a large number of system files, a system restore may be the only way to get the computer to run smoothly again. If a hard drive has become corrupted and cannot be repaired using a disk utility, it may be necessary to reformat the hard drive and restore the computer from the original operating system installation discs. System restores may also be performed to wipe all data from a computer before selling it or transferring it to another owner. Since restoring a computer returns it to a previous state, the process also erases any new data that has been added since the previous state. Therefore, you should always back up your data before restoring a computer. The backup should be saved to an external hard drive or another disk other than the one being restored. Once you have restored the computer, you may transfer your files from the backup device back to the computer. While the term "restore" is often associated with computers, other devices can be restored as well. For example, if a smartphone is repeatedly malfunctioning, it can often be fixed by restoring the system software. Most video game consoles can also be restored by running the restore operation within the system settings. Both iPods and iPhones can be restored using Apple's iTunes software. No matter what device you choose to restore, remember that the process will erase data saved on the device. Therefore, the first step in restoring a computer or any other device should be to back up your personal data. Updated December 10, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/restore Copy

Retina Display

Retina Display The term "retina display" is a hardware term coined by Apple in June, 2010. It describes a display that has a resolution of over 300 dpi. The iPhone 4, which was also announced in June, 2010, has a screen resolution of 326 dpi and was the first Apple product to include a retina display. The name "retina display" refers to the way the high-resolution display appears to the human eye. When a display has a resolution over 300 dpi, most humans cannot recognize individual pixels when viewing the screen from a distance of about 12 inches. Therefore, the pixels seem to run together, creating a smooth appearance. This is similar to digital audio that is recorded with a high sampling rate. Since the audio samples are so close together, we perceive the sound as a smooth analog signal. Since some people have better vision than others, there is no scientifically accurate number that defines a retina display. In fact, some people may be able to identify individual pixels in a retina display. Still, compared to a typical computer monitor, which has a resolution of 72 dpi, a retina display will look noticeably sharper to all users. Retina displays are especially useful for reading text on a small screen, such as an iPhone or iPod Touch. The increased resolution makes small text legible and medium-sized text easier to read. As display technology continues to evolve, retina displays are expected to be made available in larger devices, such as the iPad and HiDPI monitors. Updated December 16, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/retina_display Copy

RFID

RFID Stands for "Radio-Frequency Identification." RFID is a system used to track objects, people, or animals using tags that respond to radio waves. RFID tags are integrated circuits that include a small antenna. They are typically small enough that they are not easily noticeable and therefore can be placed on many types of objects. Like UPC labels, RFID tags are often used to uniquely identify the object they are attached to. However, unlike UPCs, RFID tags don't need to be scanned directly with a laser scanner. Instead, they can be recorded by simply placing the tag within the range of an RFID radio transmitter. This makes it possible to quickly scan several items or to locate a specific product surrounded by many other items. RFID tags have many different uses. Some examples include: Merchandise tags - These tags are attached to clothing, electronics, and other products to prevent theft from retail stores. These tags are typically deactivated at the place of checkout. Tags that have not been deactivated will sound the alarm system near the store's exit. Inventory management - Products stored in warehouses may be given RFID tags so they can be located more easily. Airplane luggage - RFID tags may be placed on checked bags so they can be easily tracked and located. Toll booth passes - E-ZPass and I-Pass receivers may be placed in automobiles, allowing cars and trucks to pass through toll booths without needing to stop. This enables drivers to make toll payments automatically. Credit cards - Some credit cards have built-in RFIDs so they can be "waved" rather than "swiped" near compatible readers. The SpeedPass wand is an example of an RFID-only payment device. Animal tags - RFID tags can be placed on pet collars to help identify pets if they are lost. Tags may also be placed on birds and other animals to help track them for research purposes. The above list includes just a few of the applications of radio-frequency identification. There are many other existing and potential applications for RFID tags as well. Updated September 4, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rfid Copy

RGB

RGB Stands for "Red Green Blue." RGB refers to three hues of light that can be mixed together to create different colors. Combining red, green, and blue light is the standard method of producing color images on screens, such as TVs, computer monitors, and smartphone screens. The RGB color model is an "additive" model. When 100% of each color is mixed together, it creates white light. When 0% of each color is combined, no light is generated, creating black. It is sometimes contrasted with CMYK (cyan, magenta, yellow, and black), the standard color palette used to create printed images. CMYK is a "subtractive" color model since the colors get darker as they are combined. Blending 100% of each color in the CMYK color model produces black, while 0% of each color results in white. How many colors can be created using RGB? The number of colors supported by RGB depends on how many possible values can be used for red, green, and blue. This is known as color depth and is measured in bits. The most common color depth is 24-bit color, also known as "true color." It supports eight bits for each of the three colors, or 24 bits total. This provides 2^8, or 256 possible values for red, green, and blue. 256 x 256 x 256 = 16,777,216 total possible colors in the "true color" palette. The human eye can only distinguish about seven million colors, so color depths above 24-bit are rarely used. RGB Example When displaying a color image on a screen, each pixel has a specific RGB value. In 24-bit color, this value is between 0 and 255 (where 0 is no color and 255 is full saturation). A purple pixel will have a lot of red and blue, but little to no green. For example, the following RGB value might be used to create purple: R: 132   (84 in hexadecimal) G: 17   (11 in hexadecimal) B: 170   (AA in hexadecimal) Since each color has 256 possible values, it can also be represented using two hexadecimal values (16 x 16), as shown above. The standard way to display an RGB value is to use the hexadecimal values for red, green, and blue, preceded by a number symbol, or hashtag. Therefore, the purple color above is defined in RGB as #8411AA. Updated May 17, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rgb Copy
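
The conversion from decimal channel values to the hexadecimal notation shown above is a simple formatting step. Here is a small Python sketch using the purple value from this entry:

    def rgb_to_hex(r, g, b):
        # Each channel is 0-255, i.e. two hexadecimal digits.
        return f"#{r:02X}{g:02X}{b:02X}"

    print(rgb_to_hex(132, 17, 170))   # "#8411AA" -- the purple example above
    print(rgb_to_hex(255, 255, 255))  # "#FFFFFF" -- full red + green + blue = white
    print(rgb_to_hex(0, 0, 0))        # "#000000" -- no light = black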

RGBA

RGBA Stands for "Red Green Blue Alpha." RGBA is a color model based on the RGB model. It mixes red, green, and blue light to create any color in the spectrum, then adds a fourth value, alpha, for transparency. It is a standard color model often used for image editing and web graphics that require gradual levels of transparency. The RGBA color model, like RGB, is additive. When the three color channels are at their maximum values, they produce white light; at their minimum values, they create black. The alpha channel adds a separate value for transparency, ranging from 0% (completely transparent) to 100% (completely opaque). A transparency value in between results in a pixel that lets some background color show through. For example, an image with a drop shadow uses a gradient of increasingly-transparent pixels to create a shadow that darkens the background behind it. An RGBA image over a checkerboard background, alpha at 100% on the left and 0% on the right The most common color depth for RGBA images is 32 bits per pixel: 8 bits each for red, green, blue, and alpha. Each 8-bit color channel can contain a value between 0 and 255; this produces an image with the same number of colors as a 24-bit True Color RGB image (256 x 256 x 256 = 16,777,216 possible colors) with an additional 256 levels of transparency possible for each pixel. However, while image editors display each RGB value as a range between 0 and 255, transparency values are instead shown as a range between 0 and 100%. RGBA Example CSS uses RGBA values as one possible method of setting the color of text and graphics. Red, green, and blue values are input as numbers between 0 and 255, and alpha values as a number between 0.0 and 1.0. For example, the following value produces fully opaque blue text: rgba(0, 0, 255, 1.0) This next example instead produces partially transparent blue text: rgba(0, 0, 255, 0.5) NOTE: The .PNG and .TIFF file formats support 32-bit RGBA images, as do professional image editor formats like .PSD. While the .GIF file format supports transparency, it does not include an alpha channel but instead defines a single color in its lookup table as transparent. Updated April 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rgba Copy
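
When an RGBA pixel is drawn over a background, each color channel is blended according to the alpha value, which is how partially transparent pixels let the background show through. The Python sketch below shows the standard "source over" blend for a single pixel; the specific pixel values are made up for illustration.

    def blend_over(foreground, alpha, background):
        """Blend one RGB pixel over another using its alpha (0.0 - 1.0)."""
        return tuple(round(alpha * f + (1 - alpha) * b)
                     for f, b in zip(foreground, background))

    # A 50%-transparent blue pixel drawn over a white background:
    print(blend_over((0, 0, 255), 0.5, (255, 255, 255)))   # (128, 128, 255) -> light blue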

Ribbon

Ribbon The ribbon is a user interface element created by Microsoft, which was introduced with Microsoft Office 2007. It is part of the "Microsoft Office Fluent" interface and combines the menu bar and toolbar into a single floating pane. By default, the ribbon is located at the top of the screen in Office applications, such as Access, Excel, PowerPoint, Word, and Outlook. The purpose of the ribbon is to provide quick access to commonly used tasks within each program. Therefore, the ribbon is customized for each application and contains commands specific to the program. Additionally, the top of the ribbon includes several tabs that are used to reveal different groups of commands. For example, the Microsoft Word ribbon includes Home, Insert, Page Layout, References, and other tabs that each display a different set of commands when selected. Since the ribbon contains both the program's menu options and toolbar commands, it cannot be removed from the screen. However, it can be minimized to free up more screen real estate for the primary document window. To minimize the ribbon, you can either click the upside down eject icon at the top of the ribbon or use the keyboard shortcut Control+F1. Once the ribbon has been minimized, it can be restored by clicking the same icon or using the same keyboard shortcut. The ribbon is the primary user interface (UI) element in both Office 2007 and Office 2010. Office 2011 for Mac also includes the ribbon, but has a slightly different layout. Additionally, other software companies can license the Microsoft Office UI, allowing them to include a custom ribbon in their programs. For example, Autodesk AutoCAD 2009 and later includes the ribbon at the top of the primary program window. Updated January 11, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ribbon Copy

Rich Text

Rich Text Rich text is text that supports various types of formatting. Unlike plain text, which consists only of unformatted text, rich text allows the user to apply basic styles like bold, italics, and underlining; they may also change the font, font size, and text color. Rich text also supports paragraph formatting that lets you change text alignment and line spacing. Automatic list formatting allows both numbered and bulleted lists, with options for numbering and bullet styles. You can add a simple table to a rich text file to show off data. You can also change page layout options to customize the paper size, margins, and page orientation. Some rich text editors even allow you to embed images. Some of the font formatting possible using rich text Word processors like Microsoft Word, Apple Pages, and OpenOffice Writer create rich text documents. However, each program has its own proprietary default document format; you must convert a document before opening it in another word processor. The generic Rich Text Format, which uses the .RTF file extension, is the most widely supported among word processors and can be used as an alternative to share a rich text file with any word processor. Updated March 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rich_text Copy

Right-Click

Right-Click Most computer mice have at least two buttons — the left mouse button is considered the primary button, and the right one is the secondary button. A right-click is the act of clicking the right mouse button and often triggers a secondary interaction like opening a contextual menu. A right-click is also known as a "secondary click." Laptop touchpads, even those without physical buttons, also support right-clicking. Some touchpads have a pair of buttons below the touch surface that work just like mouse buttons; the left button is the primary button, and the right one is secondary. Touchpads without buttons, which allow you to click by pressing anywhere on the touchpad surface, let you right-click by either pressing with two fingers at the same time or sometimes by pressing in the lower-right corner of the touchpad. Windows and macOS both allow you to customize this behavior. Right-clicking is often used to open contextual menus, which are pop-up menus that change contents depending on where and what you click. For example, if you right-click on your desktop, you may see a menu that includes options to change view settings and the desktop wallpaper; if you right-click on a folder instead, the menu would show options to open the folder or view its properties. Some programs, like video games, may use a right-click to perform other secondary functions. However, most programs use a right-click to open contextual menus that let you customize the selected tool or object. Updated January 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/right-click Copy

RIP

RIP Stands for "Routing Information Protocol." RIP is a protocol used by routers to exchange routing information on a network. Its primary functions are to 1) determine the most efficient way to route data on a network and 2) prevent routing loops. RIP maintains a routing table, which lists all routers reachable within a network. Each router uses this table to determine the most efficient way to route data. RIP incorporates distance-vector routing, which calculates the best path based on the direction and distance between routers. Each packet is forwarded to the appropriate routers until the packet reaches its destination. RIP also prevents endless routing loops by limiting the number of "hops" between the source and destination. A hop is recorded each time a packet is forwarded from one router to another. The maximum number of hops allowed by RIP is 15. If the hop count hits 16, RIP determines the destination is not reachable and the transfer is terminated. Three different versions of RIP exist: RIPv1 - standardized in 1988; uses "classful" routing, which defines an IP class, but does not include subnet information RIPv2 - developed in 1993 and standardized in 1998; uses "classless" routing and carries subnet information; supports MD5 authentication RIPng - standardized in 1997; an extension of RIPv2 that supports IPv6 NOTE: Most modern networks use newer routing methods, such as OSPF (Open Shortest Path First), rather than RIP to route data. Updated January 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rip Copy
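
The hop-count rule described above can be sketched in a few lines of Python. This toy function applies the basic distance-vector update: a route learned from a neighbor costs one additional hop, and anything that reaches 16 hops is treated as unreachable. It is only an illustration of the rule, not an implementation of the RIP protocol.

    INFINITY = 16   # in RIP, 16 hops means "unreachable"

    def update_route(routing_table, destination, neighbor, neighbor_hops):
        """Apply the basic distance-vector rule for one advertised route."""
        hops = min(neighbor_hops + 1, INFINITY)      # one more hop to reach the neighbor
        current = routing_table.get(destination, (None, INFINITY))
        if hops < current[1]:                        # keep the shorter path
            routing_table[destination] = (neighbor, hops)

    table = {}
    update_route(table, "10.0.2.0/24", "RouterB", 3)    # reachable in 4 hops via RouterB
    update_route(table, "10.0.3.0/24", "RouterC", 15)   # would be 16 hops -> treated as unreachable
    print(table)                                        # only the reachable route is kept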

Ripcording

Ripcording Recording audio with a computer involves capturing an audio signal and saving it digitally on a hard drive. Ripping an audio track is the process of converting an audio file to an MP3 or other compressed audio format. Ripcording is the simultaneous recording and ripping of an audio signal. Ripcording is a popular way to download and archive Internet radio broadcasts, digital cable TV radio, and satellite radio. All you need is ripcording software that will record audio and compress it at the same time. By ripcording live audio, you can save audio streams on your hard drive and listen to them whenever you want. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ripcording Copy

Ripping

Ripping Ripping is the process of copying data from a CD or DVD to a hard disk. This is done using a software utility, often referred to as a CD or DVD "ripper," which can extract audio or video data from optical media. While "ripping" sounds destructive, the process does not affect the original data. However, the copied data may be modified during the ripping process so that it can be played on a computer. For example, DVD ripping programs may convert .VOB video files (which are formatted for DVD players) to standard .MPG files. The resulting video data can be played back using a basic video player program. Once data has been ripped from a disc, it can be played back without requiring the original disc to be present. Therefore, ripping is useful for archiving media on your computer or for backing up audio and video data. However, most commercial CDs and DVDs are copyrighted, meaning you are not allowed to copy the data to other discs or redistribute the data from your computer. Also, it is typically a violation of copyright to rip data from a disc that you do not own. Therefore, make sure you are familiar with the usage policy of the media before you rip it to your hard disk. Updated December 23, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ripping Copy

RISC

RISC Stands for "Reduced Instruction Set Computing" and is pronounced "risk." RISC is a type of processor architecture that uses fewer and simpler instructions than a complex instruction set computing (CISC) processor. RISC processors perform complex operations by combining several simpler instructions. Several CPUs in the 1990s and early 2000s used RISC architecture. One of the most popular was the IBM PowerPC processor, which Apple used in its PowerMac line of computers for nearly a decade. In 2006, Apple switched to CISC-based Intel CPUs. Nearly all personal computers now use CISC processors made by Intel or AMD. RISC vs CISC RISC processors require fewer clock cycles per instruction than CISC processors. A RISC processor may complete more operations per second than a CISC processor running at the same clock speed. RISC processors simplify pipelining (performing multiple instructions at once) compared to CISC processors. Smaller instructions are easier to synchronize or "pipeline" than larger ones. Since RISC processors store fewer instructions than their CISC counterparts, fewer transistors are required to store instructions. Therefore, RISC processors can use more transistors to store memory. CPU transistors are now a fraction of the size they were two decades ago, so storing fewer instructions is less advantageous. Since modern computers perform a wide array of complex instructions, CISC processors typically provide better overall performance than RISC alternatives. Updated February 14, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/risc Copy

RJ45

RJ45 RJ45 is a type of connector commonly used for Ethernet networking. It looks similar to a telephone jack, but is slightly wider. Since Ethernet cables have an RJ45 connector on each end, Ethernet cables are sometimes also called RJ45 cables. The "RJ" in RJ45 stands for "registered jack," since it is a standardized networking interface. The "45" simply refers to the number of the interface standard. Each RJ45 connector has eight pins, which means an RJ45 cable contains eight separate wires. If you look closely at the end of an Ethernet cable, you can actually see the eight wires, which are each a different color. Four of them are solid colors, while the other four are striped. RJ45 cables can be wired in two different ways. One version is called T-568A and the other is T-568B. These wiring standards are listed below: T-568A White/Green (Receive +) Green (Receive -) White/Orange (Transmit +) Blue White/Blue Orange (Transmit -) White/Brown Brown T-568B White/Orange (Transmit +) Orange (Transmit -) White/Green (Receive +) Blue White/Blue Green (Receive -) White/Brown Brown The T-568B wiring scheme is by far the most common, though many devices support the T-568A wiring scheme as well. Some networking applications require a crossover Ethernet cable, which has a T-568A connector on one end and a T-568B connector on the other. This type of cable is typically used for direct computer-to-computer connections when there is no router, hub, or switch available. Updated July 1, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rj45 Copy

ROM

ROM Stands for "Read-Only Memory." ROM is a common hardware component found in electronic devices that stores the basic instructions that the device needs to operate. Unlike RAM, which is volatile temporary memory that is erased when powered off, ROM is non-volatile memory, where data is written to the chip and stays there even when removed from power. ROM chips are found in all electronic devices that need software instructions to carry out a task, from computers and smartphones to cars, microwave ovens, and electronic children's toys. The software instructions written to a ROM chip are called firmware. Computers use a special ROM chip that contains a type of firmware called the BIOS to handle the basic tasks required to boot up and get the other components started. There are several different types of ROM chips that can be used, depending on the technological needs of the device the ROM is used in. Masked ROM is the earliest form of ROM, where the software instructions are physically encoded directly into the circuits when the chip is manufactured. Programmable ROM, or PROM, is a ROM chip that is blank when manufactured and can be permanently written to a single time. Erasable Programmable ROM, or EPROM, is a type of programmable ROM that can be erased by exposing the circuits to ultraviolet light and written to again. Electrically Erasable Programmable ROM, or EEPROM, can be erased electrically and rewritten. This type of ROM allows for easy firmware updates on devices that use it, without requiring any other equipment. Flash Memory is a type of EEPROM that can be rewritten quickly. It used to be significantly more expensive than other types of ROM, limiting its use, but is now much cheaper and is the most common type of ROM chip technology. Updated September 23, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rom Copy

Root

Root In the computer world, "root" refers to the top-level directory of a file system. The word is derived from a tree root, since it represents the starting point of a hierarchical tree structure. The folders within the tree represent the branches, while the actual files are considered the leaves. However, unlike a real life tree, a data tree can be visualized upside down, with the root at the top and directories and subdirectories spanning downward. The root node of a file system is also called the root directory. On a Windows-based PC, "C:" represents the root directory of the C drive. On Macintosh and Unix systems, the root directory is represented by a single forward slash (/).

Root Directory

Root Directory The root directory, or root folder, is the top-level directory of a file system. The directory structure can be visually represented as an upside-down tree, so the term "root" represents the top level. All other directories within a volume are "branches" or subdirectories of the root directory. While all file systems have a root directory, it may be labeled differently depending on the operating system. For example, in Windows, the default root directory is C:. On Unix systems and in OS X, the root directory is typically labeled simply / (a single forward slash). As you move up directories within a file system, you will eventually reach the root directory. Website Root Directory The root directory of a website is not the top-level directory of the volume, but rather the top-level directory of the website folder. On an Apache server, for example, the name of the root directory is public_html and is located within the primary user folder for the website. Therefore, the root folder of a website can be accessed using a single forward slash. For example, a link to a file called home.html within the root directory of a website can be accessed using the reference "/home.html." Updated August 5, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/root_directory Copy

Rooting

Rooting Rooting an Android device is the process of granting a device's user root (or administrative) privileges. Most Android device manufacturers restrict user and app permissions by default, so rooting a device gives access to commands, system files, and folders that are otherwise locked. Root access on an Android device is equivalent to superuser permissions on a Unix or Linux system, or jailbreaking an iOS device. The steps required to root a device vary by device and manufacturer, but most follow a general outline. First, you need to unlock the device's bootloader to allow it to boot a custom ROM disk image. After that, you need to flash the custom ROM to the device — replacing the stock Android operating system with a version that supports root access. Finally, you need to install a root management app that helps you modify the system settings and manage which apps have root access. Benefits and Drawbacks There are several common reasons for rooting an Android device. First, a rooted device gives you many more customization options than a stock device. Many custom ROMs allow you to theme the operating system, changing the appearance of nearly every part of the user interface. Rooting also allows you to remove pre-installed apps that you don't want. You can also install new apps that require root access to do something they wouldn't be able to do otherwise, like backing up other apps' data. Some utility apps can tweak hardware settings to improve performance or battery life, like over- or under-clocking the phone's CPU. Rooting a device does carry some significant risks. While some manufacturers allow users to root their devices and include official instructions for it, others don't — it may void the warranty to even try. Root access makes malware more dangerous, so you must be careful to only install trustworthy apps. Some apps may not work properly on a rooted device due to bugs or security limitations. For example, banking software often blocks rooted devices as a security measure; other apps that rely on GPS location may block rooted devices to prevent GPS spoofing apps. Finally, a mistake during the rooting process may result in a bricked device that no longer works at all. Updated April 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rooting Copy

Rootkit

Rootkit A rootkit is a software program designed to provide a user with administrator access to a computer without being detected. Rootkits are considered one of the most serious types of malware since they may be used to gain unauthorized access to remote systems and perform malicious operations. The name "rootkit" includes the word "root," because the goal of a rootkit is to gain root access to a computer. By logging in as the root user of a system, a hacker can perform nearly any operation he or she wishes. This includes installing software and deleting files. The word "kit" refers to the software files that make up the rootkit. These may include utilities, scripts, libraries, and other files. Rootkits often work by exploiting security holes in operating systems and applications. Others create a "back door" login to the operating system, which allows a user to bypass the standard login procedure when accessing a system. Once root access has been enabled, a rootkit may attempt to hide any traces of unauthorized access by modifying drivers or kernel modules, hiding certain files, and quitting active processes. Fortunately, most operating systems and software programs are designed to prevent unauthorized access via rootkits or other malware. Therefore, it is difficult to use a rootkit to gain access to modern systems. However, rootkits are constantly modified and updated in order to try and breach security holes. Therefore, it is wise to install antivirus or other security software on your computer to monitor any attempts of unauthorized access to your system. Updated May 24, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rootkit Copy

Rosetta

Rosetta Rosetta is a series of software compatibility layers used in several versions of macOS, during periods when Apple transitioned its computer lineup from one processor architecture to another. They allow users to continue using older applications on new computers while giving developers time to update their software. The first version of Rosetta allowed applications written for PowerPC-based Macs to run on Intel-based ones; the second version allowed Intel applications to run on Apple Silicon Macs. The name "Rosetta" comes from the Rosetta Stone, the artifact used to translate Egyptian hieroglyphs. Mac OS X 10.4.4 "Tiger" included the first version of Rosetta during Apple's transition to Intel x86 processors. It operated without any user intervention, automatically translating PowerPC applications into Intel-compatible versions as they ran. Since this process requires processing power, some applications ran slower through Rosetta on Intel-based Macs than on PowerPC processors. This version of Rosetta was removed from Mac OS X in version 10.7 "Lion" after the transition to Intel was completed. A second version of Rosetta was introduced in macOS 11 "Big Sur," as Apple began another processor architecture transition from Intel x86-64 processors to their own ARM-based Apple Silicon processors. This version of Rosetta translates applications written for Intel Macs to run on Apple Silicon and does so when the apps are installed rather than at runtime. Translating them ahead of time means that most applications run just as fast, if not faster, through Rosetta on Apple Silicon than on the Intel Macs they were written for. NOTE: You can check an application's architecture by selecting the application icon in the Applications folder and choosing "Get Info" from the File menu (or pressing Command-I). If the Kind is listed as "Application (Intel)," it is an Intel app that will run through Rosetta; if it says "Application (Universal)," the application's developer compiled it for both Intel and Apple Silicon processors; if it appears listed as "Application (Apple Silicon)," it was developed only for Apple Silicon Macs and will not run on Intel Macs. Updated February 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rosetta Copy

Router

Router A router is a hardware device that directs traffic between networks and devices. It receives incoming data packets from another network and routes them either to the packet's destination computer on the local network, or to the next network along its path. A router can also keep logs of activity, run a firewall, and manage the network to prioritize certain traffic and devices. Home routers use DHCP to assign IP addresses to every computer and device on the network. A typical home router includes a few Ethernet ports to plug computers and other devices into, but the network a router creates is not limited to the number of ports it has. Connecting a switch (or multiple switches) to a router can expand the number of devices that can be physically connected to the network, all managed by the router. Many home routers also create wireless (Wi-Fi) networks that allow devices to access a LAN without a cable. TP-Link AX1800 home router Even though some home routers provided by ISPs also include a built-in modem, it is important to note that modems and routers serve two different functions. A modem converts the digital signal from a computer network into a format that can be transmitted over telephone or cable lines to provide a connection to the Internet. A modem is assigned an IP address by the ISP. A router coordinates all of the data traffic on a local area network, between individual computers and the wider Internet through a connected modem. Core Routers The largest, most powerful routers are called core routers, and they manage the Internet's backbone by sending data all around the world. ISPs have their own routers to direct Internet traffic between their customers and other ISPs. Small home routers coordinate traffic between computers on a local area network (LAN) and the Internet. Data sent from one computer to another over the Internet will pass through a large number of routers on its journey. Updated September 22, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/router Copy

Row

Row A row is a horizontal group of values within a table. It contains values for multiple fields, which are defined by columns. Because rows contain data from multiple columns, each table row in a database may be considered a record. For example, a row (or record) from an Employee table may contain an employee's name, address, position, salary, and other information. When querying a database, the results are typically returned as an array of rows, which is similar to a group of records. Individual values can be accessed by selecting a specific column (or field) within a row. NOTE: When displaying data in a table format, the top row is often called the table header. The cells in the row typically contain the name of each field. Updated June 7, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/row Copy
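
The sketch below shows rows being returned from a database query, using Python's built-in sqlite3 module and a throwaway in-memory Employee table; the column names and sample data are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")      # throwaway in-memory database
    conn.execute("CREATE TABLE Employee (name TEXT, position TEXT, salary INTEGER)")
    conn.execute("INSERT INTO Employee VALUES ('Ada', 'Engineer', 90000)")
    conn.execute("INSERT INTO Employee VALUES ('Grace', 'Manager', 110000)")

    rows = conn.execute("SELECT name, position, salary FROM Employee").fetchall()
    for row in rows:             # each row is one record, returned as a tuple of column values
        print(row[0], row[2])    # select specific columns (fields) within the row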

RPC

RPC Stands for "Remote Procedure Call." An RPC is a message instructing one computer to execute code on behalf of another computer over a network. It allows a program running locally on one computer to offload a procedure (a block of code that performs a specific task) to another program on a server, which returns the result. The program that issued the RPC then treats the result of the remote procedure the same as it would treat the result of one that ran locally. RPCs utilize a client-server model, where multiple client computers may connect to a server to send instructions and receive data. When a program issues an RPC, it includes a set of parameters to instruct the server what procedure to run and how to run it. For example, a client computer sending an RPC message to a DNS server would include the domain name it wants to resolve as a parameter; another message sending a print job to a network printer would include print settings like paper size and orientation as parameters. A client making an RPC call to a server, which processes the request and sends a response There is no single RPC protocol. Instead, RPC is a general concept that many protocols use to enable remote execution of code. Some common RPC protocols include XML-RPC (which formats an RPC using XML markup) and JSON-RPC (which formats an RPC using JSON). However, all RPC protocols send messages in a similar way. A client program first issues a procedure call to a piece of client-side code, called a stub, that acts as a proxy for that procedure. The stub communicates with the server to tell it what procedure to run and what parameters to use. The server performs the procedure and sends the result back to the stub, which returns the result to the program that issued the RPC. Updated November 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rpc Copy
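
Python's standard library includes an XML-RPC client that hides the stub mechanics described above. In the sketch below, the server URL and the remote method name are hypothetical; the point is that the remote call looks just like a local function call.

    import xmlrpc.client

    # Hypothetical XML-RPC server -- the URL and method name are made up for illustration.
    server = xmlrpc.client.ServerProxy("http://rpc.example.com:8000/")

    # The proxy object acts as the client-side stub: calling a method on it sends an
    # RPC message with the parameters, and the return value comes from the server.
    result = server.resolve_hostname("www.example.com")
    print(result)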

RPM

RPM Stands for "Revenue Per Mille." The word "mille" means "one thousand" in Latin, so RPM is short for "Revenue Per 1,000 Impressions" in online advertising. It is similar to CPM, but measures the revenue from 1,000 ad impressions instead of the cost of the ads. Advertisers typically focus on CPM, while publishers monitor RPM. RPM is calculated from ad impressions and overall revenue. For example, a website that gets 20,000 page views per day and generates $300 in daily ad revenue has a page RPM of $15.00. $300 revenue / 20,000 page views x 1,000 = $15.00 RPM If a website gets 10,000 page views each day and the page RPM is $5.00, the website will produce $50/day of advertising revenue. 10,000 page views x ($5.00 / 1,000 RPM) = $50 revenue Page RPM vs Impression RPM Two specific types of RPM are "Page RPM" and "Impression RPM." The above examples use page RPM since they are based on page views. Impression RPM, or ad unit RPM, measures the revenue per 1,000 impressions of a specific ad unit. If a webpage only has one banner ad, the page RPM and impression RPM will be identical. If a page has more than one ad unit (which is often the case), the individual ads will typically have lower impression RPMs than the page RPM. If certain ads have low RPMs, a publisher may replace or remove them with the goal of generating higher page RPMs. Page RPM vs eCPM In some cases, RPM and CPM are used interchangeably. While CPM means "Cost Per 1,000 Impressions," some advertising platforms use CPM in publisher reports instead of RPM. Specifically, the metric "eCPM" or "effective CPM" is sometimes used in place of "page RPM." While "page RPM" is more accurate, "eCPM" in a publisher advertising report means the same thing. Revolutions Per Minute Outside of advertising, RPM typically stands for "Revolutions Per Minute." In hardware specifications, RPM may refer to fan speed or hard drive rotational speed. Faster HDD speeds provide lower seek times, which translate to faster data access. The standard rotational speed of a 3.5" hard drive in a desktop computer is 7,200 RPM, while portable hard drives often run at 5,400 RPM. High-end server HDDs run at 10,000 or even 15,000 RPM. Updated October 3, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rpm Copy
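
The two calculations above are easy to reproduce. A small Python sketch, using the same numbers as the examples in this entry:

    def rpm(revenue, impressions):
        """Revenue per 1,000 impressions (or page views)."""
        return revenue / impressions * 1000

    def revenue(impressions, rpm_value):
        return impressions * rpm_value / 1000

    print(rpm(300, 20_000))        # $15.00 page RPM
    print(revenue(10_000, 5.00))   # $50.00 daily revenue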

RSS

RSS Stands for "Really Simple Syndication." RSS is a method of publishing online content feeds using a standard XML format. Websites use RSS to provide a feed of new articles, blog posts, podcasts, and other content that gets automatically pushed to anyone subscribing to that feed. People can subscribe to RSS feeds to read articles in feed reading software and get podcasts in a podcast player. Subscribers to an RSS feed can read articles and blog posts within a feed reader, either a standalone application or in a web browser using a browser extension. A subscriber just needs to add the RSS feed URL to the reader application, which will automatically download articles and sort them chronologically as they come in. NetNewsWire is a popular RSS feed reader application Podcasts also use RSS to distribute episodes to subscribers. Podcast player apps let listeners subscribe to a podcast by adding its RSS feed, automatically downloading new episodes as soon as they are released. The XML used by RSS includes metadata like episode titles, artwork, and show notes that provide extra information about each episode. RSS feeds for news publishers are declining in popularity following the rise of social media. Sites like Facebook and Twitter provide both a live feed of news articles and a feed of discussion about those articles—for better or worse—which is more valuable to the publishers and advertisers. NOTE: RSS may also stand for "RDF Site Summary" or "Rich Site Summary," but "Really Simple Syndication" is most common. Updated October 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rss Copy
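
Because an RSS feed is just XML, it can be read with any XML parser. The Python sketch below pulls item titles out of a feed using the standard library; the feed content is a small invented example following the usual channel/item structure (a real feed would be downloaded from its feed URL).

    import xml.etree.ElementTree as ET

    # A tiny invented RSS 2.0 feed, for illustration only.
    feed = """<rss version="2.0">
      <channel>
        <title>Example Blog</title>
        <item><title>First post</title><link>https://example.com/1</link></item>
        <item><title>Second post</title><link>https://example.com/2</link></item>
      </channel>
    </rss>"""

    root = ET.fromstring(feed)
    for item in root.find("channel").findall("item"):
        print(item.findtext("title"), "-", item.findtext("link"))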

RTE

RTE Stands for "Runtime Environment." As soon as a software program is executed, it is in a runtime state. In this state, the program can send instructions to the computer's processor and access the computer's memory (RAM) and other system resources. When software developers write programs, they need to test them in the runtime environment. Therefore, software development programs often include an RTE component that allows the programmer to test the program while it is running. This allows the program to be run in an environment where the programmer can track the instructions being processed by the program and debug any errors that may arise. If the program crashes, the RTE software keeps running and may provide important information about why the program crashed. When you see the name of a software program with the initials "RTE" after it, it usually means the software includes a runtime environment. While developers use RTE software to build programs, RTE programs are available to everyday computer users as well. Software such as Adobe Flash Player and Microsoft PowerPoint Viewer allow Flash movies and PowerPoint presentations to be run within the player software. These programs provide a runtime environment for their respective file formats. The most common type of RTE, however, is the Java RTE (or JRE), which allows Java applets and applications to be run on any computer with JRE installed. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rte Copy

RTF

RTF Stands for "Rich Text Format." RTF is a file format for rich text documents that may include styled text and images. It is a widely-supported document format that most text editors and word processors can open and edit. Microsoft first developed the RTF format as an interchange format, allowing users to save Word documents in a format that maintained most of Word's features but could be opened in other applications. The RTF format supports most rich text formatting, including font styling (like bold, italics, and underline), custom typefaces, and multiple font sizes. It also includes page layout options allowing you to customize page margins, line spacing, tab width, and text alignment. You can add annotations and comments, insert tables, and create automatically-numbered and bulleted lists. You can also embed images inside an RTF file, including bitmap images, PNGs, GIFs, and JPEGs. An RTF document in Microsoft WordPad Most word processors and text editors supporting rich text can open and edit RTF files. However, not every application that can open the RTF format supports every available feature. Some formatting options, image types, tables, annotations, and comments are often not supported by basic text editors like Windows WordPad and macOS TextEdit. If you don't need to share a text document with other people using other word processors, you'll have more features available using a word processor's default file format. File Extension: .RTF Updated May 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/rtf Copy

RTMP

RTMP Stands for "Real-Time Messaging Protocol." RTMP is a protocol designed for transmitting audio and video over the Internet. It is used to stream multimedia content on demand and also supports live streaming. Streaming over RTMP requires a media server, such as Adobe Media Server (AMS). Clients connect to the streaming server over TCP, typically using port 1935. RTMP stream URLs typically use a format such as: rtmp://stream.domain.com/livestream Most web browsers do not recognize the rtmp:// prefix. Instead, streamers can publish an RTMP URL (provided by a media server) to a streaming service that accepts RTMP input. Examples include Vimeo, YouTube Live, or Facebook Live. The service then broadcasts the stream via the corresponding website or app. History Macromedia created RTMP. Adobe acquired Macromedia in 2005, including the rights to RTMP, which was part of the Flash technology platform. While RTMP can stream Flash formats (such as .SWF, .FLV, and .F4V), it supports many other types of audio/video transmissions as well. Since the use of Flash has decreased, RTMP has become more commonly used to stream standard compressed audio and video rather than Flash media. Adobe has also made RTMP an open specification, which can be used by third-party applications. Examples include Nimble Streamer, Open Broadcaster Software, and Wowza Streaming Engine. NOTE: As of 2020, sales and support for Adobe Media Server, Flash Media Live Encoder, and the RTMP SDK are provided by Veriskope rather than Adobe. Updated March 14, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rtmp Copy

Ruby

Ruby While in the physical world, "ruby" refers to a red gemstone, in the computer world, ruby is an object-oriented programming language. If a special woman in your life asks for a ruby for her birthday, I would recommend choosing the gemstone for the gift. The Ruby programming language was created by Yukihiro Matsumoto and is named after the birthstone of one of his colleagues. Interestingly, the pearl (as in the Perl language) is the June gemstone, while ruby is the July gemstone. This makes the subtle suggestion that Ruby is a step forward from Perl. Like Perl, Ruby's strength lies in its simplicity. The syntax is very basic and it is completely object-oriented. This means every type of data handled by the language is treated as an object, even data types as simple as integers. The source code can be interpreted by the official Ruby interpreter or by JRuby, a Java-based interpreter. Ruby is an open-source language, like PHP, which means it is free to download and use. It can be compiled and run on just about any operating system, including Unix, Windows, and Mac OS X. For more information on Ruby and to download the Ruby software, visit the Ruby Home Page. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ruby Copy

Runtime

Runtime Runtime refers to the time a program runs on a computer or device. During runtime, the computer's CPU executes the commands in the program's machine code. It starts when you first open a program and the operating system loads it into RAM, along with any extra libraries or external frameworks the program calls for. The runtime ends when you close or quit the program, and the memory allocated to it becomes available for other programs. In a software development context, the word "runtime" also refers to the environment a software developer designs a program to run within. The runtime environment consists of the operating system and linked code libraries. For example, programs that run in Windows make calls to dynamically linked library files (DLLs) to execute various standard functions. The runtime environment may also include virtual machines or interpreter programs necessary for the program to run. A runtime error is an error, or bug, that happens while the program is running. The term differentiates these errors from errors that occur while the program is compiling, like syntax errors and other compilation errors. Updated April 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/runtime Copy

Runtime Error

Runtime Error A runtime error is a program error that occurs while the program is running. The term is often used in contrast to other types of program errors, such as syntax errors and compile time errors. There are many different types of runtime errors. One example is a logic error, which produces the wrong output. For example, a miscalculation in the source code of a spreadsheet program may produce the wrong result when a user enters a formula into a cell. Another type of runtime error is a memory leak. This type of error causes a program to continually use up more RAM while the program is running. A memory leak may be due to an infinite loop, not deallocating unused memory, or other reasons. A program crash is the most noticeable type of runtime error since the program unexpectedly quits while running. Crashes can be caused by memory leaks or other programming errors. Common examples include dividing by zero, referencing missing files, calling invalid functions, or not handling certain input correctly. NOTE: Runtime errors are commonly referred to as "bugs," and are often found during the debugging process, before the software is released. When runtime errors are found after a program has been distributed to the public, developers often release patches, or small updates, designed to fix the errors. Updated April 27, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/runtime_error Copy
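
The difference between a runtime error and a syntax or compile-time error is easy to demonstrate. The Python snippet below is syntactically valid, so it only fails while running, when it hits the divide-by-zero and missing-file cases mentioned above:

    # This code parses fine; the errors only appear at runtime.
    def average(values):
        return sum(values) / len(values)

    try:
        average([])                   # runtime error: division by zero
    except ZeroDivisionError as e:
        print("Runtime error:", e)

    try:
        open("missing_file.txt")      # runtime error: referencing a missing file
    except FileNotFoundError as e:
        print("Runtime error:", e)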

RUP

RUP Stands for "Rational Unified Process." RUP is a software development process from Rational, a division of IBM. It divides the development process into four distinct phases that each involve business modeling, analysis and design, implementation, testing, and deployment. The four phases are: Inception - The idea for the project is stated. The development team determines if the project is worth pursuing and what resources will be needed. Elaboration - The project's architecture and required resources are further evaluated. Developers consider possible applications of the software and costs associated with the development. Construction - The project is developed and completed. The software is designed, written, and tested. Transition - The software is released to the public. Final adjustments or updates are made based on feedback from end users. The RUP development methodology provides a structured way for companies to envision and create software programs. Since it provides a specific plan for each step of the development process, it helps prevent resources from being wasted and reduces unexpected development costs. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/rup Copy

S/PDIF

S/PDIF Stands for "Sony/Philips Digital Interface" (and is pronounced "spid-if"). S/PDIF is a digital audio transmission standard for transferring audio between two devices. It is uni-directional (one-way) and supports uncompressed stereo audio and compressed surround sound audio. A S/PDIF audio signal may be transmitted over coaxial or fiber optic cable. Coax transmissions use RCA connectors, while optical transmissions use Toslink connectors. Regardless of the transfer medium, a S/PDIF signal is always digital, not analog. Since S/PDIF supports multiple types of connections, S/PDIF ports on AVRs and other devices may not be uniform. Most modern devices that support S/PDIF provide an optical Toslink connection. S/PDIF supports uncompressed 2-channel (stereo) audio, making it an ideal interface for CD players, turntables, and other stereo audio devices. While S/PDIF supports surround sound signals, it does not have enough bandwidth to transmit more than two channels of uncompressed digital audio. Therefore, S/PDIF must transfer 5.1 and 7.1 surround sound audio in a compressed format, such as Dolby Digital. Specifically: S/PDIF Supports: 2-channel uncompressed 16-bit, 44.1 kilohertz stereo audio Compressed surround sound audio (Dolby Digital, DD+, and DTS) S/PDIF Does Not Support: More than two channels of uncompressed audio Uncompressed surround sound audio (Dolby TrueHD and DTS-HD Master Audio) Uncompressed surround sound audio requires a high-bandwidth digital connection, such as HDMI or ADAT. Updated August 15, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spdif Copy

SaaS

SaaS Stands for "Software as a Service." SaaS is software that is deployed over the Internet rather than installed on a computer. It is often used for enterprise applications that are distributed to multiple users. SaaS applications typically run within a Web browser, which means users only need a compatible browser in order to access the software. SaaS is considered part of cloud computing since the software is hosted on the Internet, or the "cloud." Because SaaS applications are accessed from a remote server rather than installed on individual machines, it is easy to maintain the software for multiple users. For example, when the remote software is updated, the client interface is also updated for all users. This eliminates incompatibilities between different software versions and allows vendors to make incremental updates without requiring software downloads. Additionally, users can save data to a central online location, which makes it easy to share files and collaborate on projects. Several types of SaaS applications are available. For example, Google offers a suite of online applications called Google Apps. These include Google Docs, which allows users to create and share documents online, Google Sites, which enables users to collaborate on projects via a custom Web interface, and several other applications. Intuit offers online financial management software through Mint.com and provides tax software via TurboTax online. Microsoft's Windows Live service provides Web versions of Microsoft Office programs, such as Word, Excel, and PowerPoint. Documents created with the online Office applications can be saved on a user's SkyDrive and shared with other Windows Live users. SaaS is also common in the medical field, where doctors use online software to save, update, and share patient records. Some SaaS software is free to use, while other online programs require an upfront payment or a monthly fee. Enterprise SaaS applications often require a commercial license, but online software licenses are usually less expensive than individual software licenses. Because of the many benefits of SaaS, it is becoming an increasingly common way to distribute software. Updated January 31, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/saas Copy

Safe Mode

Safe Mode Safe Mode is a special diagnostic mode for booting into an operating system. It starts the computer with only the bare minimum system files required to boot, disabling all third-party drivers and utilities. It can help a computer boot up when it otherwise wouldn't in order to troubleshoot and resolve problems. Safe Mode can help you narrow down the cause of problems and take steps to fix them. For example, if Windows fails to boot normally but successfully loads in Safe Mode, you know that the problem stems from software rather than hardware; you can uninstall drivers and applications that may be causing the issue and then attempt to boot normally. You can also run diagnostic software to repair corrupted files or directories that could cause the operating system to crash or fail to boot. Safe Mode is most often associated with Windows, but other operating systems have safe modes. You can start older versions of Windows in Safe Mode by holding the F8 key as it boots, but Windows 8.1 and later instead automatically display options to start in Safe Mode after several failed boot attempts. You can access the macOS version of Safe Mode, called Safe Boot, by holding the Shift key as the computer restarts. Updated February 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/safemode Copy

Samba

Samba Samba is an open-source software implementation of the Server Message Block (SMB) protocol. It allows users on non-Windows platforms to access network resources shared by Windows computers and servers. Without Samba, Windows and non-Windows computers would be isolated from one another, even while connected to the same LAN. Samba is available for free under the GNU General Public License, and most Unix and Linux distributions include it to support cross-platform file sharing. Samba implements the SMB protocol for both sides of the client-server model. It allows non-Windows clients to access network shares created on Windows servers while also allowing non-Windows servers to create SMB network shares. Unix and Linux users can use Samba to mount network shares as part of the computer's file structure, or they may also use software utilities to access network shares like they would access an FTP server. Ubuntu accessing an SMB network share using Samba Non-Windows computers can also use Samba to access other shared network resources, like network printers. Samba also supports Active Directory, which allows Unix and Linux systems to participate in Microsoft Active Directory domains. The embedded operating systems in network-attached storage (NAS) devices often use Samba to create SMB shares. NOTE: Older versions of macOS used Samba for SMB networking, but it was replaced by a custom implementation, SMBX, with macOS 10.9 Mavericks. Updated August 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/samba Copy

Sample

Sample A sample is a digital snapshot of an analog signal at a given point in time. All digital audio files, and some digital video files, consist of sampled analog signals. When all a signal's samples are combined, they can recreate a close approximation of the original signal. Computers use a component called an analog-to-digital converter (ADC) to analyze analog signals and convert them into digital signals that can be stored and edited on a computer. An ADC captures samples at a sample rate appropriate for the type of signal — several times a second for video (depending on the video's frame rate) to thousands of times per second for audio signals. It quantizes each sample to convert the analog value to a close digital approximation. A sample represents the value of a signal at a single moment A sample's bit depth determines how accurately it represents the original signal. A higher bit depth permits a larger range of possible values for each sample and allows subtler changes between them. For example, an audio signal sampled at 16 bits per sample allows each sample to cover 65,536 possible amplitude values; a signal sampled at 24 bits per sample could instead represent 16,777,216 possible values. NOTE: The word "sample" may also refer to a short audio clip that software or hardware can play when triggered. For example, a synthesizer keyboard may include a sample of a bird chirp sound that plays when you press a specific key. Updated March 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sample Copy
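The sketch below (hypothetical Python, assuming an analog value already normalized to the range -1.0 to 1.0) shows the basic idea of quantizing a single sample to a 16-bit integer, as an ADC does:
# Quantize one analog sample value to a signed 16-bit integer
def quantize_16bit(analog_value):
    levels = 2 ** 15 - 1            # 32,767 steps above and below zero
    return round(analog_value * levels)

print(quantize_16bit(0.5))      # 16384, about half of full scale
print(quantize_16bit(-0.25))    # -8192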

Sample Rate

Sample Rate In audio production, a sample rate (or "sampling rate") defines how many times per second a sound is sampled. Technically speaking, it is the frequency of samples used in a digital recording. The standard sample rate used for audio CDs is 44.1 kilohertz (44,100 hertz). That means each second of a song on a CD contains 44,100 individual samples. When an analog sound, such as a vocal performance, is sampled at a rate of tens of thousands of times per second, the digital recording may be nearly indistinguishable from the original analog sound. CDs use a sample rate of 44.1 kHz because it allows for a maximum audio frequency of 22.05 kilohertz. The human ear can detect sounds from roughly 20 hertz to 20 kilohertz, so there is little reason to record at higher sample rates. However, because digital audio recordings are estimations of analog audio, a smoother sound can be gained by increasing the sample rate above 44.1 kHz. Examples of high sample rates include 48 kHz (used for DVD video), 88.2 kHz (2x the rate of CD audio), and 96 kHz (used for DVD-Audio and other high definition audio formats). While audio aficionados may appreciate higher sample rates, it is difficult for most people to perceive an improvement in audio quality when the sample rate is higher than 44.1 kHz. A more effective way to improve the quality of digital audio is to increase the bit depth, which determines the amplitude range of each sample. 16-bit audio, used in audio CDs, provides 2^16, or 65,536, possible amplitude values. 24-bit audio, used in high definition formats, can store 2^24, or 16,777,216, possible amplitude values – 256 times more than 16-bit audio. NOTE: Many DAW programs support sample rates up to 192 kHz. Recording at extremely high sample rates allows sound engineers to preserve the audio quality during the mixing and editing process. This can improve the end result of a song or audio clip even if the final version is saved with a sample rate of 44.1 kHz. Updated August 22, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sample_rate Copy
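The figures quoted above are easy to verify (a hypothetical Python calculation, not from the original text):
# Possible amplitude values per sample at different bit depths
print(2 ** 16)              # 65536 values for 16-bit audio
print(2 ** 24)              # 16777216 values for 24-bit audio

# Total samples in a 3-minute stereo track at the CD rate of 44.1 kHz
print(44100 * 60 * 3 * 2)   # 15876000 individual samples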

Sampling

Sampling Before digital recording took over the audio and video industries, everything was recorded in analog. Audio was recorded to devices like cassette tapes and records. Video was recorded to Beta and VHS tapes. The media was even edited in analog format, using multichannel audio tapes (such as 8-tracks) for music, and film reels for video recordings. This method involved a lot of rewinding and fast-forwarding, which resulted in a time-consuming process. Fortunately, digital recording has now almost completely replaced analog recording. Digital editing can be done much more efficiently than analog editing and the media does not lose any quality in the process. However, since what humans see and hear is in analog format (linear waves of light and sound), saving audio and video in a digital format requires converting the signal from analog to digital. This process is called sampling. Sampling involves taking snapshots of an audio or video signal at very fast intervals, usually tens of thousands of times per second. The quality of the digital signal is determined largely by the sampling rate, or how frequently the signal is sampled. The higher the sample rate, the more samples are created per second, and the more realistic the resulting audio or video file will be. For example, CD-quality audio is sampled at 44.1 kHz, or 44,100 samples per second. The difference between a 44.1 kHz digital recording and the original audio signal is imperceptible to most people. However, if the audio was recorded at 22 kHz (half the CD-quality rate), most people would notice the drop in quality right away. Samples can be created by sampling live audio and video or by sampling previously recorded analog media. Since samples estimate the analog signal, the digital representation is never as accurate as the analog data. However, if a high enough sampling rate is used, the difference is not noticeable to the human senses. Because digital information can be edited and saved using a computer and will not deteriorate like analog media, the quality/convenience tradeoff involved in sampling is well worthwhile. Updated January 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sampling Copy
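As a rough sketch of the process (hypothetical Python, assuming a pure 440 Hz tone), the loop below captures about one millisecond of samples at the CD rate; a lower sample rate would simply take fewer, coarser snapshots of the same wave:
# Sample a 440 Hz sine wave at 44.1 kHz for roughly one millisecond
import math

frequency = 440           # Hz, the analog tone being sampled
sample_rate = 44100       # samples per second

samples = []
for n in range(44):       # 44 samples is about 1 ms at 44.1 kHz
    t = n / sample_rate                       # time of this snapshot
    samples.append(math.sin(2 * math.pi * frequency * t))

print(len(samples), "samples captured")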

SAN

SAN Stands for "Storage Area Network." A SAN is a specialized network of storage devices accessible by multiple servers or workstations. A SAN is a separate network from a LAN, with access between the two typically managed by file servers that connect to both networks and create a bridge. A file server sees the SAN storage pool as if it were a local disk connected directly to the server, and it can share that storage with the rest of the LAN. SANs are used by enterprises and other large organizations that need large pools of high-speed, low-latency network data storage. A file server can only connect to as many drives as it has physical interfaces for, so a SAN can provide more data storage capacity than a single file server could provide. Storage devices in a SAN are connected using high-speed Fibre Channel connections and managed by SAN switches. A file server equipped with a Fibre Channel interface can connect to a SAN switch and access its storage pool. Multiple file servers can independently connect to the same SAN to provide network administrators with more options for managing data access. Unlike a NAS or traditional network storage drive, a SAN stores data using block-level storage that manages and organizes data in fixed-sized blocks instead of by entire files. Block-level storage can provide higher data access speeds and lower latency by spreading files out across multiple physical drives. For example, if a SAN saves a 5 GB data file in blocks on five separate drives, each one only needs to read 1 GB of data — providing much quicker access than if one drive had to read all 5 GB by itself. In most cases, access to a SAN storage pool is provided by a file server acting as a bridge. However, sometimes it may be beneficial for a workstation to connect to a SAN directly and bypass any overhead from a file server. For example, a video editing workstation that requires high-speed access to large 8K video files can use a Fibre Channel interface to connect directly to a SAN storage pool for the fastest possible access speed. Updated November 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/san Copy
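The block-level idea described above can be sketched loosely as follows (hypothetical Python for illustration only, not an actual SAN implementation): a file is split into fixed-size blocks that are spread across several drives rather than stored whole on one:
# Conceptual only: distribute fixed-size blocks of a file across five "drives"
BLOCK_SIZE = 1024 * 1024            # 1 MB blocks
drives = [[] for _ in range(5)]     # stand-ins for five physical drives

data = b"x" * (5 * BLOCK_SIZE)      # pretend this is a 5 MB file
for i in range(0, len(data), BLOCK_SIZE):
    block = data[i:i + BLOCK_SIZE]
    drives[(i // BLOCK_SIZE) % len(drives)].append(block)

print([len(d) for d in drives])     # each drive holds one block: [1, 1, 1, 1, 1]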

Sandboxing

Sandboxing Sandboxing is a software management strategy that isolates applications from critical system resources and other programs. It provides an extra layer of security that prevents malware or harmful applications from negatively affecting your system. Without sandboxing, an application may have unrestricted access to all system resources and user data on a computer. A sandboxed app, on the other hand, can only access resources in its own "sandbox." An application's sandbox is a limited area of storage space and memory that contains the only resources the program requires. If a program needs to access resources or files outside the sandbox, permission must be explicitly granted by the system. For example, when a sandboxed app is installed in OS X, a specific directory is created for that application's sandbox. The app is given unlimited read and write access to the sandboxed directory, but it is not allowed to read or write any other files on the computer's storage device unless it is authorized by the system. This access is commonly granted using the Open or Save dialog box, both of which require direct user input. Sandboxed App Limitations While sandboxing provides added security for users, it can also limit the capabilities of an application. For example, a sandboxed app may not allow command line input since the commands are run at a system level. Utilities such as backup programs and keyboard shortcut managers may not be granted sufficient permissions to function correctly. For this reason, some programs cannot be sandboxed. NOTE: OS X has supported sandboxing since OS X Lion, which was released in 2011. The Mac App Store has required apps to be sandboxed since March 2012. Windows does not natively provide app sandboxing, but some apps (such as Microsoft Office programs) can be run in a sandboxed mode. Additionally, several Windows utilities allow you to run apps in a sandbox, preventing them from affecting the system or other applications. Updated July 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sandboxing Copy

Sass

Sass Stands for "Syntactically Awesome Style Sheets." Sass is an extension of cascading style sheets (CSS), the language used to define the layout and formatting of HTML documents. It uses fully-compatible CSS syntax, but provides additional features like CSS variables and nested rules that make CSS more efficient and easier to edit. One of the drawbacks of standard CSS is that it does not support variables. For example, if you have multiple styles that are the same color, you need to define the color separately for each style. If you decide to change the color, you must change it for every instance in the CSS document. With Sass, you can define the color as a variable and assign the variable to every style that uses it. If you decide to change the color, you only need to change it once — where it is initially defined in the document. The below example shows how to define and use a CSS variable in Sass.   $myColor: #00695C;   .pageTop { background-color: $myColor; }   .infoText { color: $myColor; }   .pageBottom { background-color: $myColor; } Sass also supports nested rules, allowing developers to write more efficient code. In the example below, the .button class is nested within the #top p style.   #top p   {     color: #004D40;     .button     {       background-color: #039BE5;       color: #FFF;     }   } When compiled, the above code will produce the following CSS:   #top p { color: #004D40; }   #top p .button { background-color: #039BE5; color: #FFF; } While Sass provides several benefits to web developers, Sass documents are not recognized by web browsers. Therefore, a Sass file must first be compiled into CSS before being used in an HTML document. This can be done locally before uploading the CSS to the web server using a program like Compass.app or Koala. It can also be compiled on the server using a PHP or Ruby script that compiles Sass into CSS. Updated June 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sass Copy

SATA

SATA Stands for "Serial Advanced Technology Attachment," or "Serial ATA." SATA is a standard bus interface for connecting storage devices like hard drives, solid-state drives, and optical drives to a computer's motherboard. It replaced the previous standard, parallel ATA, and provides much faster transfer speeds using smaller and simpler cables. The SATA standard has received several updates since its introduction in 2003. SATA 3, the third and most recent revision, supports a maximum data transfer speed of 6 Gbps (4.8 Gbps, or 600 MB/s, after accounting for data encoding overhead). The specification also allows the hot-swapping of devices so users can connect and disconnect drives without shutting down their computers. A set of SATA ports on a motherboard Both SATA drives and parallel ATA drives are IDE devices, which means they integrate the storage controller chip into the drives themselves. This adds more physical size and complexity to the drives but simplifies the connection to the motherboard. SATA cables are significantly thinner than parallel ATA cables, making them easier to organize inside a computer's case. SATA devices each have their own independent bus, unlike parallel ATA devices that share a single bus between two devices (requiring two drives to share one cable, and for the drives to have primary and secondary roles assigned using jumpers). NOTE: A variant of SATA, called eSATA, exists for external hard drive connections. Updated January 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sata Copy
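The effective throughput quoted above follows directly from the raw signaling rate (a hypothetical back-of-the-envelope calculation): SATA 3 signals at 6 Gbps, but its 8b/10b encoding means only 8 of every 10 transmitted bits carry data.
# SATA 3 effective throughput after 8b/10b encoding overhead
raw_rate_gbps = 6.0
effective_gbps = raw_rate_gbps * 8 / 10   # 4.8 Gbps of usable data
print(effective_gbps)                     # 4.8
print(effective_gbps * 1000 / 8)          # 600.0 MB/s (decimal megabytes)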

Scalable

Scalable Scalable hardware or software can expand to support increasing workloads. This capability allows computer equipment and software programs to grow over time, rather than needing to be replaced. Scalable hardware may refer to a single computer system, a large network of computers, or other computer equipment. The scalability of a single computer, such as a workstation, depends on how expandable the computer is. In this context, the words "scalable," "expandable," and "upgradable" may be used interchangeably. For example, a computer that has multiple drive bays has scalable disk space, since more internal storage devices may be added. A computer that includes multiple PCI slots has scalable graphics and I/O capabilities since PCI cards may be added or upgraded. A scalable network should be able to support additional connections without data transfers slowing down. In each instance, scalable hardware can expand to meet increasing demands. Scalable software typically refers to business applications that can adapt to support an increasing amount of data or a growing number of users. For example, a scalable database management system (DBMS) should be able to efficiently expand as more data is added to the database. Scalable Web hosting software should make it easy to add new users and new Web hosting accounts. The key is that the software "grows" along with the increased usage. This means scalable programs take up limited space and resources for smaller uses, but can grow efficiently as more demands are placed on the software. The scalability of hardware and software is important to growing businesses. After all, it is typically more economical to upgrade current systems than replace them with new ones. While all hardware and software have some limitations, scalable equipment and programs offer a long-term advantage over those that are not designed to grow over time. Updated January 21, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scalable Copy

Scanner

Scanner A scanner is an input device that scans documents such as photographs and pages of text. When a document is scanned, it is converted into a digital format. This creates an electronic version of the document that can be viewed and edited on a computer. Most scanners are flatbed devices, which means they have a flat scanning surface. This is ideal for photographs, magazines, and various documents. Most flatbed scanners have a cover that lifts up so that books and other bulky objects can also be scanned. Another type of scanner is a sheet-fed scanner, which can only accept paper documents. While sheet-fed scanners cannot scan books, some models include an automatic document feeder, or ADF, which allows multiple pages to be scanned in sequence. Scanners work in conjunction with computer software programs, which import data from the scanner. Most scanners include basic scanning software that allows the user to configure, initiate, and import scans. Scanning plug-ins can also be installed, which allow various software programs to import scanned images directly. For example, if a scanner plug-in is installed for Adobe Photoshop, a user can create new images in Photoshop directly from the connected scanner. While Photoshop can edit scanned images, some programs like Acrobat and OmniPage can actually recognize scanned text. This technology is called optical character recognition, or OCR. Scanning software that includes OCR can turn a scanned text document into a digital text file that can be opened and edited by a word processor. Some OCR programs even capture page and text formatting, making it possible to create electronic copies of physical documents. Updated January 8, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scanner Copy

Scareware

Scareware Scareware, also known as "rogue security software," is software that uses false error messages to lure you into purchasing a software program. These alerts or warnings may appear on websites or within applications installed on your computer. When you click the associated download link, the software is downloaded to your computer. A typical scareware warning message will state that your computer is infected with a virus or malware without actually scanning your computer. The message will also recommend that you download software to "fix" the non-existent problem. Once you download the software, the installer may install spyware, adware, or other unwanted programs on your computer. In some cases, scareware will ask you to enter personal information, similar to a phishing scam. In order to avoid scareware schemes, it is helpful to check the source of security alerts that pop up on your computer. If the source is not known or credible, the warning message may not be legitimate. For example, if a website advertisement says your computer is infected, you can assume the message is false, since the website has no way of actually scanning the files on your computer. Installing a utility like Microsoft Security Essentials or a third-party Internet security program may also help you catch rogue security software before it can affect your computer. While some alerts and error messages may be scareware schemes, it is important to remember that other notifications may be legitimate. For example, the alerts provided by Symantec, Kaspersky, AVG and other security programs are valid and should be taken seriously. By familiarizing yourself with the security software installed on your computer, you'll be able to determine which warnings are valid and which may be from scareware programs. Updated July 6, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scareware Copy

Schema

Schema A schema is an outline, diagram, or model. In computing, schemas are often used to describe the structure of different types of data. Two common examples include database and XML schemas. 1. Database Schema A database schema describes the tables and corresponding fields contained in a database. It may be displayed as a list of tables that each contain a sublist of fields along with the associated data type. More commonly, however, database schemas are displayed as visual diagrams. Boxes represent individual tables and lines show how the tables are connected. In some cases, these lines may include arrowheads to indicate the flow of data. Database schemas may also include comments that describe the purpose of each table and individual fields. 2. XML Schema An XML schema defines the elements that an XML file may contain. It provides a specific structure for XML data, which is important when sharing XML files between multiple systems. Defining an XML schema ensures an XML document or feed will not contain unknown values, which may cause parsing errors. Below is a simplified example of an XML schema, or XML schema definition (XSD), that declares a "note" element containing two child elements:   <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">  <xs:element name="note">  <xs:complexType>  <xs:sequence>  <xs:element name="to" type="xs:string"/>  <xs:element name="from" type="xs:string"/>  </xs:sequence>  </xs:complexType>  </xs:element>  </xs:schema>   Schemas are most commonly used to describe databases and XML files, but they can also be used to describe other types of data. For example, a game developer may define a schema to describe 3D objects used in a video game. A software developer may use a schema to describe the structure of a file format used by an application. Updated September 26, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/schema Copy

Scraping

Scraping Scraping, or "web scraping," is the process of extracting large amounts of information from a website. This may involve downloading several web pages or the entire site. The downloaded content may include just the text from the pages, the full HTML, or both the HTML and images from each page. There are many different methods of scraping a website. The most basic is manually downloading web pages. This can be done by either copying and pasting the content from each page into a text editor or using your browser's File → Save As… command to save local copies of individual pages. Scraping can also be done automatically using web scraping software. This is the most common way to download a large number of pages from a website. In some cases, bots can be used to scrape a website at regular intervals. Web scraping may be done for several different purposes. For instance, you may want to archive a section of a website for offline access. By downloading several pages to your computer, you can read them at a later time without being connected to the Internet. Web developers sometimes scrape their own websites when testing for broken links and images within each page. Scraping can also be done for unlawful purposes, such as copying a website and republishing it under a different name. This type of scraping is viewed as a copyright violation and can lead to legal prosecution. NOTE: While scraping a website for the purpose of republishing information is always wrong, scraping a site for other purposes may still violate the website's terms of use. Therefore, you should always read a website's terms of use before downloading content from the site. Updated September 22, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scraping Copy
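The simplest automated approach is a short script that downloads a page's HTML (a minimal hypothetical Python sketch using only the standard library; example.com is a placeholder URL):
# Minimal scraping sketch: download one page and save its HTML locally
from urllib.request import urlopen

url = "https://example.com/"              # placeholder URL
html = urlopen(url).read().decode("utf-8")

with open("page.html", "w", encoding="utf-8") as f:
    f.write(html)

print(len(html), "characters saved")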

Screen Burn

Screen Burn Screen burn or "screen burn-in" is a residual image left on a screen after displaying the same image for a long time. It is a faded version of the image or "ghost image" that covers part or all of the screen. Screen burn is usually caused by video or graphics with content that does not move or change for an extended period of time. Examples include the menu bar on a Mac or the task bar on a Windows computer. On a TV, it may be the logo of a specific channel or the information panel of a video game. On a smartphone, it may be the status bar at the top of the screen. If certain pixels consistently display the same image while other pixels display changing graphics, the unchanging pixels may leave behind a ghost image. Screen burn affects several types of screens, including CRTs, plasma displays, and OLED screens. In older displays, such as CRT monitors, leaving a single image displayed for a long time could physically damage the screen. This type of burn-in could produce dark spots that were noticeable even when the display was turned off. In modern displays, screen burn typically occurs when the electronics that produce light lose their luminance. When specific pixels display a single image for a long time, they may lose their red, green, or blue (RGB) brightness relative to other pixels. This may produce a ghost image that is the inverse of the image that was consistently displayed. Screen burn is rare with LCD panels because liquid crystals are less susceptible to burn-in. OLED displays, on the other hand, are susceptible to screen burn since individual LEDs may lose their luminance relative to other LEDs if overused. This may happen when certain pixels display the same color over a long period of time. How to Avoid Screen Burn Screen burn is rare on modern displays with normal use. For noticeable screen burn to take place, you would have to run the same program or tune to the same channel every day for several weeks or months. However, if you consistently play the same video game or watch the same channel, screen burn can happen over time. You can limit and possibly avoid screen burn by varying the content you watch on your screen. You should also turn off your display when you're not using it. Finally, it is wise to enable sleep mode or a screensaver that automatically runs when the screen has not changed for a while. Updated November 21, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/screen_burn Copy

Screen Reader

Screen Reader A screen reader is a type of accessibility software utility that helps people with a visual impairment use a computer or mobile device. A screen reader converts on-screen text into spoken audio, braille output on a refreshable braille display, or both. Most operating systems include built-in screen reader software that supports reading out text in webpages, documents, and the system user interface. While the name "screen reader" may imply software that just reads the screen, most screen-reading software also offers new ways to interact with a computer. Accessibility APIs supported by operating systems allow screen readers to navigate between sections of the screen using keyboard shortcuts and describe the contents of each section. The software can speak the name of a button or menu, let you move between buttons or menus until you find what you need, then allow you to "click" it. The software can also automatically speak aloud notifications and alerts that appear on the screen. Screen readers will also narrate special descriptive text, embedded in images and documents, that doesn't appear on screen at all. Most screen readers also support special peripherals called refreshable braille displays. These devices include an array of 8-dot braille cells that raise dots to create a line of braille text. Since these cells are refreshable, they constantly change as new text is selected. These displays also include buttons that let you control a computer without moving your hand far away from the line of braille text. Updated January 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/screen_reader Copy

Screen Saver

Screen Saver A screen saver, or "screensaver," is an animation that appears on a screen when the device has been idle for a certain amount of time. Screensavers were created to prevent screen burn, where images would get physically "burned" into a screen when they did not change for a long time. For this reason, screensavers include continual animations or frequently changing images that do not stay in the same place very long. Screen burn was primarily an issue with CRT monitors, which were replaced by flat-screen displays in the early 2000s. While flat-screen devices are less susceptible to screen burn, users had become so used to screensavers that modern desktop operating systems such as Windows and macOS still include them. Additionally, screen burn may affect some LCD and OLED displays, including televisions, especially as they age. So screensavers still have a functional purpose today. Another way to avoid screen burn is to configure a device to turn off the display or enter sleep mode after a few minutes of inactivity. Screen Saver vs Desktop Background The terms "screen saver" and "desktop background" are sometimes used interchangeably, but they are two different things. A screensaver is an animation that appears on the screen when a device is idle. A desktop background, or wallpaper, is an image that stays on the desktop while the device is in use. Mobile devices typically have a desktop background, but no screen saver option. You can customize both the screen saver and desktop background in the Windows Control Panel or macOS System Preferences. Screensaver options typically include the screensaver type, animation options, and idle time before the screensaver begins. You can also choose a "hot corner" that allows you to start the screensaver immediately by dragging the cursor to a corner of the screen. Updated September 10, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/screen_saver Copy

Screen Tearing

Screen Tearing Screen tearing is graphics distortion that occurs when the graphics processor is out of sync with the display. It causes a horizontal line to appear during video playback or gameplay in a video game since the top section is out of sync with the bottom. When the GPU is under a heavy load, it may not be able to keep up with the refresh rate of the monitor. The result is that part of the screen is redrawn in one frame and the rest of the screen is redrawn in another. If this continues for multiple frames, a noticeable horizontal line will appear on the screen. There are several ways to prevent or fix screen tearing. Lower the resolution of the video or game so that your GPU doesn't have to work as hard. Enable the "hardware acceleration" option in the affected application. For example, Google Chrome has a toggle for this setting. Enabling hardware acceleration can improve video playback within the web browser. Enable the "vertical sync" or "vsync" option. Some video games provide this option specifically to combat screen tearing. However, it may slow down the frame rate to match the refresh rate of your monitor. Enable "triple buffering" if the application allows it. This ensures the entire frame is passed to the final buffer before being sent to the display. Upgrade to a more powerful video card that can support the videos or games you play on your computer. If you are a serious gamer, it might be helpful to buy a display specifically designed to prevent screen tearing. For example, some displays include G-Sync technology, which syncs the display output with an NVIDIA GPU. Updated December 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/screen_tearing Copy

Screenshot

Screenshot A screenshot is an image taken of the contents of a computer's screen. It may include the entire screen — including the desktop, all open windows and menus, and even the mouse cursor — or it may include only a limited selection or a single window. Taking a screenshot is an easy way to save what you see on the screen in order to share it with someone else or reference it later. Screenshots taken of a computer's screen are either copied to the clipboard or saved as a file (typically a PNG image). Text within a screenshot is saved as part of the image and is not editable, nor can you move a window in a screenshot to see what was behind it. However, some screenshot utilities allow you to add your own markup to a screenshot after you take it, drawing shapes and entering text to add context to it. A screenshot taken in Windows and open in the Snip & Sketch tool, which lets you add markup to a screenshot Windows and macOS both provide keyboard shortcuts that let you quickly take screenshots. Since each operating system provides several options, you can choose exactly what you want to capture and how to save it: Windows: Print Screen copies the entire screen to the clipboard. Windows Logo Key + Print Screen saves a screenshot of the entire screen to your Pictures folder. Windows Logo Key + Shift + S in Windows 10 and later opens the snipping tool utility. This tool lets you choose between full-screen, rectangular selection, or specific window screenshots and even add a short delay. You can also annotate screenshots by adding drawings and text. macOS: Command + Shift + 3 saves a screenshot of the entire screen to a file. Command + Shift + 4 saves a rectangular selection of the screen to a file. Command + Shift + Control + 3 copies a screenshot of the entire screen to the clipboard. Command + Shift + Control + 4 copies a rectangular selection of the screen to the clipboard. Command + Shift + 5 opens a screenshot utility with more tools. You can choose whether to include the whole screen, a selection, or a specific window, as well as record a video of the screen instead of a still image. It also lets you choose which folder to save screenshots to, whether to include the mouse cursor, and provides several timer settings. Updated September 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/screenshot Copy

Script

Script A script is a list of programmatically-written instructions that can be carried out on command. Scripts can automate batch processes on a local computer or server, and are often used to generate dynamic webpages on a web server. Scripts are like small programs, written in a scripting language like VBScript, AppleScript, or PowerShell; unlike programs, they do not need to be compiled before running. The commands in a script may be interpreted by an individual program or an interpreter. For example, a Photoshop script can only contain actions for Photoshop to apply to an image, while a Visual Basic or AppleScript file instead can send commands to multiple separate programs. Web servers run scripts written in ASP, JSP, and PHP to generate dynamic webpage content. A basic Python script to add two numbers together Writing a script in a scripting language is similar to writing a program in a programming language. Scripting languages and programming languages share many common elements and syntax. The commands in a scripting language are often simple and human-readable, allowing you to open and edit scripts in a plain text editor. NOTE: Do not run unfamiliar or unknown scripts on your computer. Some scripting languages, like Visual Basic Script, can access and modify local files to potentially damage the contents of your computer and introduce malware. Updated February 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/script Copy
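A script along the lines of the one captioned above might look like this (a hypothetical sketch; the original figure is not reproduced here):
# add_numbers.py -- a basic script that adds two numbers and prints the result
a = 4
b = 7
total = a + b
print("The sum of", a, "and", b, "is", total)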

Script Kiddie

Script Kiddie A script kiddie is a hacker, often a young amateur, who uses scripts and programs written by others instead of developing their own. Script kiddies lack the knowledge to understand the details of how the tools and exploits they use work, relying on software written by others and distributed online. They often perform their attacks for excitement and notoriety instead of for financial gain or access to specific systems. Experienced hackers tailor their attacks to a specific system and hope to prevent the victim from discovering the hack. Script kiddies using pre-made tools lack the experience to adjust their attacks to the targeted system and lack the knowledge to cover their tracks if the hack is successful. Their hacking attempts are often indiscriminate and targeted at any vulnerable system they can find, instead of deliberately focusing on a particular system. However, a script kiddie pulling off a successful attack can still cause significant damage. Certain types of hacks are most common amongst script kiddies due to the relative ease of the attack and the availability of tools online. DDoS attacks, for example, don't require advanced skills to carry out, and many pre-made tools are readily available. Website defacement is also a common script kiddie attack, where one gains access to a web server and modifies a webpage to brag or to make a statement. Finally, while script kiddies lack advanced hacking skills, many are familiar enough with HTML and web development to create convincing fake websites for phishing attempts. After all, the easiest way for a script kiddie to gain access to a system is for them to steal a username and password to it. Updated January 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/script_kiddie Copy

Scroll Bar

Scroll Bar When the contents of a window are too large to be displayed entirely within the window, a scroll bar will appear. For example, if a Web page is too long to fit within a window, a scroll bar will show up on the right-hand side of the window, allowing you to scroll up and down the page. If the page is too wide for the window, another scroll bar will appear at the bottom of the window, allowing you to scroll to the left and right. If the window's contents fit within the current window size, the scroll bars will not appear. The scroll bar contains a slider that the user can click and drag to scroll through the window. As you may have noticed, the size of the slider may change for different windows. This is because the slider's size represents what percentage of the window's content is currently being displayed within the window. For example, a slider that takes up 75% of the scroll bar means 75% of the content fits within the current window size. A slider that fills only 10% of the scroll bar means only 10% of the window's contents are being displayed within the current window size. Therefore, if two windows are the same size, the one with the smaller slider has more content than the one with the larger slider. Most scroll bars also contain up and down or left and right arrows that allow the user to scroll in small increments by clicking the arrows. However, clicking and dragging the slider is much faster, so the arrow keys are typically not used as often. Also, some mice have a scroll wheel that allows the user to scroll by dragging the wheel instead of clicking and dragging within the scroll bar. Updated September 11, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scrollbar Copy

Scroll Wheel

Scroll Wheel Computer windows are often not large enough to display the entire contents of the window at one time. Therefore, you may need to scroll through the window to view all the contents. Traditionally, this has been done by clicking and dragging the slider within the scroll bar. However, many mice now come with scroll wheels that make the scrolling process even easier. The scroll wheel typically sits between the left and right buttons on the top of a mouse. It is raised slightly, which allows the user to easily drag the wheel up or down using the index finger. Pulling the scroll wheel towards you scrolls down the window, while pushing it away scrolls up. Most modern mice include a scroll wheel, since it eliminates the need to move the cursor to the scroll bar in order to scroll through the window. Therefore, once you get accustomed to using a scroll wheel, it can be pretty difficult to live without. Most scroll wheels only allow the user to scroll up and down. However, some programs allow the user to use a modifier key, such as Control or Shift, to change the scrolling input to left and right. Some mice even have a tilting scroll wheel that allows the user to scroll left and right. The Apple Mighty Mouse has a spherical scrolling mechanism (called a scroll ball) that allows the user to also scroll left and right and even diagonally. Whatever the case, any type of scroll wheel is certainly better than nothing. Updated September 11, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scrollwheel Copy

Scrolling

Scrolling Most computer programs display their content within a window. However, windows are often not large enough to display their entire content at once. Therefore, you may have to scroll through the window to view the rest of the contents. For example, on some monitors, a page from a word processing document may not fit within the main window when viewed at 100%. Therefore, you may have to scroll down the window to view the rest of the page. Similarly, many Web pages do not fit completely within a window and may require you to scroll both vertically and horizontally to see all the content. To scroll up or down within a window, simply click the scroll bar on the right-hand side of the window and drag the slider up or down. If the window requires horizontal scrolling as well, click the scroll bar at the bottom of the window and drag the slider to the right or left. Some computer mice also include a scroll wheel that allows you to scroll through the window by rolling the wheel back and forth. Updated September 11, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scrolling Copy

SCSI

SCSI Stands for "Small Computer System Interface," and is pronounced "scuzzy." SCSI is a computer interface used primarily for high-speed hard drives. This is because SCSI can support faster data transfer rates than the commonly used IDE storage interface. SCSI also supports daisy-chaining devices, which means several SCSI hard drives can be connected to a single SCSI interface, with little to no decrease in performance. The different types of SCSI interfaces are listed below: SCSI-1: Uses an 8-bit bus, supports data transfer speeds of 4 MBps. SCSI-2: Uses a 50-pin connector instead of a 25-pin connector, and supports multiple devices. It is one of the most commonly used SCSI standards. Data transfer speeds are typically around 5 MBps. Wide SCSI: Uses a wider cable (168 cable lines to 68 pins) to support 16-bit data transfers. Fast SCSI: Uses an 8-bit bus, but doubles the clock rate to support data transfer speeds of 10 MBps. Fast Wide SCSI: Uses a 16-bit bus and supports data transfer speeds of 20 MBps. Ultra SCSI: Uses an 8-bit bus, supports data rates of 20 MBps. SCSI-3: Uses a 16-bit bus, supports data rates of 40 MBps. Also called Ultra Wide SCSI. Ultra2 SCSI: Uses an 8-bit bus, supports data transfer speeds of 40 MBps. Wide Ultra2 SCSI: Uses a 16-bit bus, supports data transfer speeds of 80 MBps. Ultra3 SCSI: Uses a 16-bit bus, supports data transfer rates of 160 MBps. Also known as Ultra-160. Ultra-320 SCSI: Uses a 16-bit bus, supports data transfer speeds of 320 MBps. Ultra-640 SCSI: Uses a 16-bit bus, supports data transfer speeds of 640 MBps. While SCSI is still used for some high-performance equipment, newer interfaces have largely replaced SCSI in certain applications. For example, FireWire and USB 2.0 have become commonly used for connecting external hard drives. Serial ATA, or SATA, is now used as a fast interface for internal hard drives. Updated November 1, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/scsi Copy

SD

SD Stands for "Secure Digital." SD is a type of memory card used to store data in portable electronic devices. Examples include digital cameras, video recorders, smartphones, and portable music players. SD cards are considered removable storage (instead of internal or external storage), since they can be inserted and removed from a compatible device. The first SD cards were introduced in 1999. They used almost the same form factor as the existing MultiMediaCard (MMC) format, but were slightly thicker (2.1 mm vs 1.4 mm). The dimensions of a standard SD card are: 24 mm wide 32 mm tall 2.1 mm thick Each SD card has a slanted upper-right corner to ensure the card can only be inserted one way. The left side of a card has a physical slider that prevents the card from being written (read-only) when moved to the LOCK position. The "secure" part of "Secure Digital" refers to its built-in DRM protection technology. The SD format supports Content Protection for Recordable Media (CPRM), which prevents protected content from being read from another storage device. Since its introduction in 1999, the SD card format has gone through several iterations. Below are different versions of SD, listed with their storage capacity and maximum data transfer rate. SD - 2 GB capacity, 12.5 MB/sec SDHC - 32 GB capacity, 25 MB/sec SDXC - 2 TB capacity, 312 MB/sec SDUC - 128 TB capacity, 985 MB/sec Another version of SD, called microSD, was introduced in 2005. microSD cards are much smaller than SD cards, with dimensions of 11 mm x 15 mm x 1mm. Small devices like GoPros and smartphones have microSD slots to reduce the overall device size as much as possible. microSD cards can be used in standard SD card slots using a microSD-to-SD adapter. Updated April 26, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sd Copy

SDK

SDK Stands for "Software Development Kit." An SDK is a collection of software used for developing applications for a specific device or operating system. Examples of SDKs include the Windows 7 SDK, the Mac OS X SDK, and the iPhone SDK. SDKs typically include an integrated development environment (IDE), which serves as the central programming interface. The IDE may include a programming window for writing source code, a debugger for fixing program errors, and a visual editor, which allows developers to create and edit the program's graphical user interface (GUI). IDEs also include a compiler, which is used to create applications from source code files. Most SDKs contain sample code, which provides developers with example programs and libraries. These samples help developers learn how to build basic programs with the SDK, which enables them to eventually create more complex applications. SDKs also offer technical documentation, which may include tutorials and FAQs. Some SDKs may also include sample graphics, such as buttons and icons, which can be incorporated into applications. Since most companies want to encourage developers to create applications for their platform, SDKs are usually provided free of charge. Developers can simply download an SDK from a company's website and begin programming immediately. However, since each software development kit is different, it can take awhile for developers to learn how to use a new SDK. Therefore, most modern SDKs include extensive documentation and have an intuitive programming interface, which helps incentivize program development. Updated April 15, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sdk Copy

SDLC

SDLC Stands for "System Development Lifecycle." SDLC is a structured approach to creating and maintaining a system used in information technology. It can be applied to networks and online services, but is most often used in software development. When applied to software, the SDLC is also called the "application development life-cycle." Some SDLC models have as few as five stages, while others have as many as ten. A typical SDLC framework used for developing a software application might include the following seven stages: Planning - The most fundamental part of the SDLC is planning. This includes steps like determining a need for a specific program, who will be the end users, what the development will cost, and how long it will take. Defining - In this stage, the general development plan is funneled into specific criteria. The specific requirements of the program are defined. At this stage, the development team may also decide what programming language should be used to build the program. Designing - This process involves creating the user interface and determining how the program will function. For larger applications, it is common to create a design document specification (DDS), which may need to be reviewed and approved before the actual development begins. Building - The building stage typically comprises the bulk of the software development process. It includes programming the source code, creating the graphics, and compiling the assets into an executable program. Small projects may involve a single programmer, while larger projects may include multiple teams working together. For example, one team might design the user interface, while another team writes the source code. For multiplatform applications, individual teams may be assigned to different platforms. Testing - The all-important testing phase allows the developer to catch unknown issues and fix any bugs that arise in the program. Some testing may be done internally, while a beta version of the software might be provided to a select group of users for public testing. Deployment - Once a program has passed the testing phase, it is ready for deployment. In this stage, the software is released to the public. It may be provided via an electronic download or as boxed software, which comes on a CD or DVD. Maintenance - After a software application has been released, there may still be additional bugs or feature requests submitted by users. The development team must maintain the software by fixing bugs and adding new features. Commercial software programs often include some level of technical support. The reason the above stages are referred to as a cycle is because these stages are repeated each time a new major version of the software is released. While the maintenance stage may encompass minor updates, most software companies stay in business by regularly releasing paid updates (version 2, version 3, etc). Before embarking on a new major version, the development team must first create a plan (stage 1) and then continue through the other stages of the SDLC. Updated November 25, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sdlc Copy

SDN

SDN Stands for "Software-Defined Networking." SDN is a networking architecture that separates the responsibilities for routing and forwarding network data packets. A central server takes over all routing decisions for the network, instead of leaving those decisions up to each individual node. An administrator can configure and manage a network through software running on the server, instead of managing each device separately. Data centers, enterprise networks, and campuses often use SDN to manage large, complex networks. Each function of a router or switch is conceptually part of one of three planes: The data plane (or forwarding plane) is responsible for the physical forwarding of data packets to the next node toward its destination. The control plane determines the routing of those data packets, creating a path of nodes for the packet to move along. The management plane configures the control plane by giving the network administrator access to a command-line interface or control panel. In a traditional network infrastructure, each device is responsible for all three planes. In a software-defined network, each switch or node is only responsible for the data plane; the central server takes over the control and management planes. The SDN's central server handles routing decisions for all the nodes on the network An administrator of a traditional network must configure each switch (or other type of node) on that network separately. A large network may have dozens or even hundreds of switches, which leads to a lot of duplicate work when updating configuration settings on each one. Software-defined networking centralizes the control and management of the entire network into a single dedicated device. An SDN server instructs the switches and other nodes how to forward data packets using a network protocol like OpenFlow. In addition to being easier to manage, software-defined networks can move data more efficiently. Individual switches making routing decisions on their own may not choose the most efficient route, since they have limited knowledge of the conditions across the entire network. An SDN server, however, can monitor and control the entire network to optimize traffic for specific applications while bypassing congestion. Updated May 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sdn Copy
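The division of responsibilities described above can be sketched conceptually (hypothetical Python for illustration only, not a real OpenFlow implementation): the controller installs forwarding rules, and each switch only looks up the rule that matches a packet's destination:
# Conceptual sketch of SDN: the controller decides routes, switches only forward
flow_tables = {"switch1": {}, "switch2": {}}

def install_rule(switch, destination, out_port):
    # Control plane: the central controller pushes a forwarding rule to a switch
    flow_tables[switch][destination] = out_port

def forward(switch, destination):
    # Data plane: the switch simply looks up the rule it was given
    return flow_tables[switch].get(destination, "send to controller")

install_rule("switch1", "10.0.0.5", out_port=3)
print(forward("switch1", "10.0.0.5"))     # 3
print(forward("switch1", "10.0.0.9"))     # send to controller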

SDRAM

SDRAM Stands for "Synchronous Dynamic Random Access Memory." SDRAM is a type of computer memory that synchronizes its data transfer actions with the system clock. Since it operates on the same clock cycles as the computer's CPU, it transfers data to and from the processor more efficiently and with less latency than older asynchronous memory. SDRAM data transfer rates are measured in MHz and represent how many times per second it can read from or write to its memory cells. Earlier types of DRAM were asynchronous, which meant that the CPU would request data from the memory controller, and the memory would respond on its own time. Since the two did not share the same system clock, one component would either send a message that the other wasn't ready to receive or have to wait for a delayed reply. By synchronizing the CPU and the RAM to the same system clock cycle, both components know exactly when to transfer data back and forth. The memory knows when to listen for a request from the CPU, and the CPU knows when the requested data will be available, so neither component is stuck waiting for the other to be ready. SDRAM modules also store data in several equally-sized memory banks and access those banks in an interleaved manner. Alternating between banks allows an SDRAM module to work faster, as the memory controller can make a request to one bank while waiting for another to fulfill the previous request. [Image: A Samsung SDR SDRAM module.] Nearly all computer system memory used since 2000 has been some form of SDRAM. The first generation, known as Single Data Rate (SDR) SDRAM, could perform a read or write action once per clock cycle. Subsequent generations of SDRAM can perform two actions per clock cycle, once on the rising edge of the electrical signal and once on the falling edge, and are known as Double Data Rate (DDR) SDRAM. As of 2023, DDR5 is the fastest available type of SDRAM. Updated August 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sdram Copy

SDS

SDS Stands for "Software-Defined Storage." SDS is software that manages data storage devices. It is commonly used for managing large amounts of data in data centers, server farms, and enterprise environments. SDS functions independently of the underlying hardware. It can merge several physical storage devices into one virtual storage volume. For example, five HDDs that each have a storage capacity of 2 terabytes can be merged into a single 10-terabyte storage area instead of five separate volumes. Modern SDS programs use virtualization to create flexible storage solutions. A storage hypervisor monitors the devices and dynamically allocates data across them. It can also provide warnings when data storage limits are near capacity. Virtualized SDS configurations make it easy to add or remove storage devices as needed to increase or reduce storage. SDS is an important data management tool for web hosts, CDNs, and other organizations that need to manage large amounts of data. It provides an efficient way of scaling storage requirements with little to no downtime. Updated July 5, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sds Copy

SDSL

SDSL Stands for "Symmetric Digital Subscriber Line." SDSL is a type of DSL Internet connection. Like ADSL, it transfers digital data over copper telephone lines. The word "symmetric" means that the upload and download bandwidth are identical. Since data flows at the same speed in both directions, SDSL is better suited to customers that need a reliable data connection to send and receive equal amounts of data. For example, a business that needs an Internet connection for VoIP service will use both upload and download bandwidth equally. The other type of DSL connection, ADSL, is asymmetric; it allows users to download much faster than it allows them to upload. Since most home Internet users download more than they upload, most ISPs offered ADSL to their home customers instead of SDSL. Updated November 14, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sdsl Copy

Search Engine

Search Engine A search engine is a web-based software tool that helps locate information on the Internet. A search engine first creates an index of web pages using page titles, content and other metadata. When a user enters a query into a search engine, it looks for matching pages in its index, runs an algorithm to rank what it finds, then presents a list of results. This results page is also known as a Search Engine Results Page, or SERP. Search engines build their indexes using small software tools called "web crawlers" or "spiders." Spiders visit web pages, scan their contents, then follow hyperlinks to other pages. They crawl and index not only the titles and keywords of web pages but also the contents of images, videos, and other documents. They will also analyze backlinks, or links from one website to another, to determine which pages are considered more authoritative than others. Some search engines use the search history of the person running the query to weigh results differently based on what that person has clicked on in previous searches. [Image: Google is the world's most popular search engine.] Different search engines not only use unique methods to index pages and rank results, but they can also have distinct purposes. Google, Bing, and Yahoo are general search engines that account for the majority of worldwide searches. Some countries have their own dominant search engines, such as Baidu in China and Yandex in Russia. Other search engines, like DuckDuckGo and Brave, place a higher priority on privacy and anonymity and will not keep records of their users' search histories. Using Search Operators Search engines support a set of boolean operator words that are used to refine a search. They help narrow a search down by telling the search engine what to prioritize and what to exclude. Some common operators are listed below. Quotation marks " " around a phrase will search for those words exactly as entered. The words must be next to each other, in that order and without other words in between. The word AND (or an ampersand &) between words will search for them together. Most search engines treat this as default behavior, but it can be used with other operators. The word OR (or a vertical bar |) between two words will search for pages containing either or both. A hyphen - before a word will exclude it from the search. An asterisk * is used as a wildcard that can represent any word or multiple words. Parentheses ( ) contain a set of operators and keywords that are treated as a single object when used with other operators. The phrase site: followed by a domain will limit results to pages in that domain. For example, searching site:techterms.com search engine operators will lead you to this page. Combining several operators together can help focus a search. For example, entering (vampire OR dracula) AND "halloween costume" -twilight should show some spooky results. NOTE: Operators like AND and OR must be entered capitalized in order to modify a search. [Image: A Google search using boolean operators.] Updated October 4, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/searchengine Copy
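The indexing step can be illustrated with a toy inverted index in Python. This is only a sketch of the concept, not how any real search engine is implemented, and the page names and text below are made up.

# Map each word to the set of pages that contain it.
pages = {
    "page1.html": "3d printers build objects layer by layer",
    "page2.html": "search engines index pages and rank results",
}

index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

# A one-word query is just a lookup in the index.
print(index.get("index", set()))   # {'page2.html'}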

Secondary Memory

Secondary Memory Secondary memory refers to storage devices, such as hard drives and solid state drives. It may also refer to removable storage media, such as USB flash drives, CDs, and DVDs. Unlike primary memory, secondary memory is not accessed directly by the CPU. Instead, data accessed from secondary memory is first loaded into RAM and is then sent to the processor. The RAM plays an important intermediate role, since it provides much faster data access speeds than secondary memory. By loading software programs and files into primary memory, computers can process data much more quickly. While secondary memory is much slower than primary memory, it typically offers far greater storage capacity. For example, a computer may have a one terabyte hard drive, but only 16 gigabytes of RAM. That means the computer has roughly 64 times more secondary memory than primary memory. Additionally, secondary memory is non-volatile, meaning it retains its data with or without electrical power. RAM, on the other hand, is erased when a computer is shut down or restarted. Therefore, secondary memory is used to store "permanent data," such as the operating system, applications, and user files. NOTE: Secondary memory may also be called "secondary storage." However, this term is a bit more ambiguous, since internal storage devices are sometimes called "primary storage devices" as well. Updated December 8, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/secondary_memory Copy

Secondary Storage

Secondary Storage Secondary storage technology refers to storage devices and storage media that are not always directly accessible by a computer. This differs from primary storage technology, such as an internal hard drive, which is constantly available. Examples of secondary storage devices include external hard drives, USB flash drives, and tape drives. These devices must be connected to a computer's external I/O ports in order to be accessed by the system. They may or may not require their own power supply. Examples of secondary storage media include recordable CDs and DVDs, floppy disks, and removable disks, such as Zip disks and Jaz disks. Each one of these types of media must be inserted into the appropriate drive in order to be read by the computer. While floppy disks and removable disks are rarely used anymore, CDs and DVDs are still a popular way to save and transfer data. Because secondary storage technology is not always accessible by a computer, it is commonly used for archival and backup purposes. If a computer stops functioning, a secondary storage device may be used to restore a recent backup to a new system. Therefore, if you use a secondary storage device to backup your data, make sure you run frequent backups and test the data on a regular basis. Updated December 31, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/secondarystorage Copy

Sector

Sector A sector is the smallest unit that can be accessed on a hard disk. Each platter, or circular disk of a hard disk, is divided into tracks, which run around the disk. These tracks get longer as they move from the middle towards the outside of the disk, so there are more sectors along the tracks near the outside of the disk than along the ones towards the center of the disk. This variance in sectors per track is referred to as "zoned-bit recording." Large files can take up thousands of sectors on a disk. If even one of these sectors becomes corrupted, the file will most likely be unreadable. While a disk utility program may be able to fix corrupted data, it cannot fix physical damage. Physically damaged sectors are called "bad sectors." While your computer may recognize and bypass bad sectors on your hard disk, certain bad sectors may prevent your disk from operating properly. Yet another good reason to always back up your data! Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sector Copy
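As a rough illustration of how quickly sectors add up, the following Python snippet estimates how many sectors a file occupies, assuming the traditional 512-byte sector size (many newer drives use 4,096-byte sectors instead).

SECTOR_SIZE = 512                               # bytes per sector (assumed)
file_size = 1_500_000                           # a 1.5 MB file

sectors_needed = -(-file_size // SECTOR_SIZE)   # ceiling division
print(sectors_needed)                           # 2930 sectors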

Secure Boot

Secure Boot Secure Boot is a security feature that prevents malicious software (malware) from running when a PC starts up. It performs a series of checks during the boot sequence that ensures only trusted software is loaded. When Secure Boot is enabled, the firmware checks the signatures (the verified origins) of all software that loads during the boot process. If all signatures are valid, the PC loads the operating system. If one or more signatures cannot be verified, the boot process is halted, preventing any malicious software from running. Windows 11 can only be installed on Secure Boot-capable PCs with a TPM chip and UEFI support. However, it is possible to disable the Secure Boot feature. Verifying Secure Boot You can check if Secure Boot is enabled on your Windows PC by following these steps: Click the Windows icon in the task bar or press the Windows key. Type msinfo32 and press Enter. The System Information window will open. Make sure System Summary is selected in the left sidebar. Look for Secure Boot State in the right section of the window and make sure the value is On. The BIOS Mode must be set to UEFI for Secure Boot to be enabled. If it is set to "Legacy," you may be able to switch to UEFI mode by holding the F2 key during startup to load the BIOS. If your computer supports UEFI, you can switch from Legacy BIOS to UEFI mode. The same interface allows you to enable and disable Secure Boot. Neither Windows 10 nor Windows 11 require Secure Boot to be enabled. However, it is wise to leave it on to prevent unwanted software, such as ransomware or spyware, from loading at startup. Updated December 11, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/secure_boot Copy

Seed

Seed In the computer world, a seed may refer to three different things: 1) A random seed, 2) seed data, or 3) a client on a peer-to-peer network. 1. Random Seed A random seed is a value used to generate random data. Since computers are programmed to follow instructions based on specific input, creating random output is more difficult than it might seem. In order to generate a random value, another value called a "random seed" must be provided as input. This may be a timestamp (which can include milliseconds), a system value (such as a GUID), a hardware serial number, or another unique value. A random seed may be generated automatically (e.g., a timestamp) or may be explicitly passed to a randomizing function as a parameter. The function uses the seed value to produce random output, which is typically a number. 2. Seed Data Seed data is information that is loaded to enable a function or program to work correctly. If a function queries an empty database, for example, it will not produce useful output. If the database is "seeded" with data, the function will generate meaningful results. Seed data is often used for testing purposes. It may be created using an automated process or can be entered manually. 3. P2P Seed A peer-to-peer (P2P) seed is a computer (or "peer") that uploads one or more files on a file sharing network, such as BitTorrent. Once a user downloads a complete file, he or she can share the file with other users. Peers that provide files for other torrent users to download are called seeds. Peers that download files but rarely or never upload them are called leeches. Updated October 25, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/seed Copy
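For example, Python's built-in random module accepts an explicit seed. Supplying the same seed reproduces the same "random" sequence, which is useful for testing; the seed value 42 below is an arbitrary choice.

import random

random.seed(42)           # explicitly seed the generator
first = random.random()

random.seed(42)           # reseeding with the same value...
second = random.random()  # ...reproduces the same "random" number

print(first == second)    # True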

Semaphore

Semaphore In computer science, a semaphore is a variable that controls access to one or more resources. It is a tool developers use to ensure functions only access valid data and run at the right time. A semaphore can prevent a deadlock or race condition by disallowing access to a resource when it is not available. There are two main types of semaphores in computer programming: binary and counting. Binary Semaphores A binary semaphore is a boolean variable that can have only two possible values (0 or 1). It is often used as a lock to restrict access to a function or resource. For example, semaphore A controls function getData(). If A = 0, getData() will not execute. If A = 1, getData() will run. A binary semaphore is also considered a flag or a switch that is either on or off. Counting Semaphores A counting semaphore can be any non-negative integer, also known as a whole number. Its value may increase or decrease based on the results of one or more functions. For example, a counting semaphore may keep track of the number of available resources in a resource pool. If the semaphore decrements more than it increments, it will eventually reach 0. A zero value indicates there are no more resources left. NOTE: A "semaphore" in the real world is a flag or other object used to signal from a distance. For example, a person may hold up a pole from a dock to signal to a ferry captain that he or she would like to come aboard. Updated February 1, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/semaphore Copy
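For instance, Python's threading module provides a counting semaphore. In this sketch, at most three threads may use a shared resource at the same time; the worker names and the "resource" itself are placeholders.

import threading

pool = threading.Semaphore(3)        # counting semaphore: at most 3 concurrent users

def use_resource(worker_id):
    with pool:                       # acquire() decrements the count; blocks when it reaches 0
        print(f"worker {worker_id} is using the resource")
                                     # release() increments the count when the block exits

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()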

SEO

SEO Stands for "Search Engine Optimization." SEO is the process of improving the visibility and ranking of your website in search engine results pages, also known as SERPs. The ultimate goal of SEO is to boost organic traffic to your site by making it more relevant for specific search queries or keywords. This is achieved through a variety of strategies and techniques that help search engines understand and value your website's content, ultimately leading to increased visibility and potential customers. At its core, SEO involves optimizing various elements of a website. Foremost is the content itself, ensuring it provides value and meets users' needs. Next are HTML elements, such as meta tags and image alt tags, which help describe the content on the page. Technical aspects are also important, such as improving site speed, ensuring mobile-friendliness, and securing the site with HTTPS. Keyword research is foundational to SEO efforts, guiding the creation and optimization of content to meet user search intent. It involves identifying the terms and phrases that potential customers use to find products, services, or information online. By incorporating these keywords into website content in a natural and user-friendly manner, a site can improve its relevance and visibility for those searches. Another critical component of SEO is building high-quality backlinks from other websites. Backlinks serve as endorsements, indicating to search engines that other sites consider the content valuable and relevant. The number and quality of inbound links can significantly impact a site's ranking in search results. SEO is not a one-time task but a continuous process of improvement and adaptation to users' evolving behavior. It requires regular analysis and adjustments to stay ahead of competitors and respond to new search engine guidelines and updates. Summary In a nutshell, Search Engine Optimization is all about making your website more attractive to search engines and users alike. By providing high-quality content, optimizing technical aspects of the site, and establishing a network of backlinks, you're not just improving your online presence but also attracting more organic traffic and achieving better rankings in search engine results. In other words, you're making your mark in the digital world. Updated May 18, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/seo Copy

Serial Number

Serial Number A serial number is a unique number or string of characters that identifies a product. While any product may have a serial number, they are especially common for electronics, such as computers, mobile devices, and audio and video equipment. Because serial numbers are unique, they can be used to identify a specific device. Therefore, when you register a product online, you will most likely be asked to enter the serial number of the product along with some other information. The serial number or "SN" links the specific device to your name or account, which is helpful for warranty purposes and for technical support requests you may have while using the product. There is no standard format for serial numbers. Instead, the style or convention is chosen by each manufacturer. Some serial numbers only include numbers, while others are alphanumeric, meaning they include both letters and numbers. It is common for serial numbers to exclude the letter "O" to avoid confusion with the number "0." Serial numbers also help manufacturers keep track of their products. For example, if a company builds the same product at multiple facilities, each location may use a different range of serial numbers. A manufacturer may also use different serial number ranges for different time periods. This helps with quality control across multiple locations and times. When a product is recalled, for example, the manufacturer may provide a range of serial numbers for the affected devices. How to Find a Serial Number Serial numbers are often printed on the back or bottom of electronic devices, making them inconspicuous and sometimes difficult to locate. The number may be printed on a label or may be engraved into the hardware itself. It is important to look for the term "Serial Number," "Ser. No.," or "SN" since there may be other numbers listed, such as the product ID, network ID, or UPC. Many electronics save the serial number permanently in the device ROM. This allows you to view it using software. For example, you can look up the serial number of a computer by viewing the computer properties (in Windows) or "About This Mac" (in macOS). You can view the serial number of your smartphone by going to Settings → About. In many cases, you can view the SN of a peripheral device by connecting it to a computer or TV with a USB or HDMI cable. NOTE: In software, the term "serial number" may also be used synonymously with "activation key." However, this has become less common in recent years. Updated July 25, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/serial_number Copy

Serial Port

Serial Port A serial port is a type of hardware interface found on personal computers from the late 1980s to the early 2000s. They were also called COM ports or RS-232 ports. Serial ports were used to connect peripherals like modems, game controllers, and mice. While generally considered obsolete in personal computing following the introduction of the USB port, serial ports are still used in embedded computers in industrial and scientific fields. Serial port connections transmit data one bit at a time in a single data stream, unlike parallel ports that transmit multiple bits simultaneously in parallel. A serial port uses different pins to send and receive data, making the connection bi-directional. Serial port connections are slower than parallel port connections, but the simpler technology means that serial port cables and connections are smaller and less expensive to implement. Multiple sizes of serial port connectors were commonplace; the most common types were 9-pin DE-9 and 25-pin DB-25 ports. Apple Macintosh computers used an 8-pin Mini-DIN connection for their serial ports. In the late 90s, the Universal Serial Bus (USB) standard arrived to replace the various serial ports (as well as the parallel port), addressing the shortcomings of the older standards. It offered higher speeds, plug-and-play compatibility, and the ability to provide more power to peripherals. Modern Uses of Serial Ports While serial ports are obsolete in personal computing, they're still commonly used in other settings. Serial ports are simple, reliable, and cheap to implement on small low-power devices with specific uses. In many specialized devices, like industrial controllers and medical equipment, a simple wired connection that sends small amounts of data is all that is required. In many cases, they are still used simply because a device was designed back when serial ports were common, but is still dependable and doesn't require a redesign and replacement. Industrial automation equipment and scientific instruments still use serial ports to exchange data and configuration settings with computers. Large-scale networking equipment also often uses serial ports for diagnostic and configuration connections. Instead of replacing otherwise-reliable equipment just to implement a newer port, administrators can instead connect to these devices using a USB-to-serial adapter. Updated December 8, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/serialport Copy
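For example, a program on a modern computer can still talk to such equipment through a USB-to-serial adapter. The sketch below assumes the third-party pyserial package; the port name ("/dev/ttyUSB0") and the "STATUS?" command are placeholders that vary by device.

import serial                                     # third-party "pyserial" package

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # open the serial port
port.write(b"STATUS?\r\n")                        # send a command to the device
reply = port.readline()                           # read the device's response
print(reply)
port.close()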

SERP

SERP Stands for "Search Engine Results Page." A SERP is the page that you see after you perform a search using a search engine. It includes a list of search results that are relevant to the search phrase or keywords you entered. Each search result typically includes the title of the page and a brief description of the page. The description may either be taken from the page's description meta tag or may contain snippets of text from the page that contain keywords from the search phrase. SERPs vary between search engines, but they all use a similar layout. The top of the page contains the search field, which typically shows the user's most recent query. The results are then displayed in order of relevance beneath the search field. Many search engines also include other search options, such as Images, Video, and News searches. You may also find an "Advanced Search" link that provides options for performing a more specific search. The results produced by the search engine's search algorithm are known as "organic results." These results are the primary content of each SERP and are displayed for every search in which results are found. Since webmasters typically want their sites to rank high on relevant SERPs, various search engine optimization SEO techniques can be used to try and improve the ranking of pages on SERPs. Depending on the search phrase entered, the search engine may also display ads or "paid listings." These links are typically displayed above the initial search results and on the right side of the page. Typically these ads are denoted by a phrase such as "Sponsored Links" or "Sponsored Results." They are supplied by advertisers who bid on keywords that are relevant to their content. Paid listings provide a way for companies to generate traffic to their websites through search engines even if their organic search ranking is not very high. Updated March 10, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/serp Copy

Server

Server A server is a computer that provides data to other computers. It may serve data to systems on a local area network (LAN) or a wide area network (WAN) over the Internet. Many types of servers exist, including web servers, mail servers, and file servers. Each type runs software specific to the purpose of the server. For example, a Web server may run Apache HTTP Server or Microsoft IIS, which both provide access to websites over the Internet. A mail server may run a program like Exim or iMail, which provides SMTP services for sending and receiving email. A file server might use Samba or the operating system's built-in file sharing services to share files over a network. While server software is specific to the type of server, the hardware is not as important. In fact, a regular desktop computer can be turned into a server by adding the appropriate software. For example, a computer connected to a home network can be designated as a file server, print server, or both. While any computer can be configured as a server, most large businesses use rack-mountable hardware designed specifically for server functionality. These systems, often 1U in size, take up minimal space and often have useful features such as LED status lights and hot-swappable hard drive bays. Multiple rack-mountable servers can be placed in a single rack and often share the same monitor and input devices. Most servers are accessed remotely using remote access software, so input devices are often not even necessary. While servers can run on different types of computers, it is important that the hardware is sufficient to support the demands of the server. For instance, a web server that runs lots of web scripts in real-time should have a fast processor and enough RAM to handle the "load" without slowing down. A file server should have one or more fast hard drives or SSDs that can read and write data quickly. Regardless of the type of server, a fast network connection is critical, since all data flows through that connection. Updated April 16, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/server Copy
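As a simple illustration of how software turns an ordinary computer into a server, Python's built-in http.server module can serve the files in the current directory over HTTP. This is only a minimal sketch; production web servers such as Apache HTTP Server or IIS offer far more.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files from the current directory on port 8000.
server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving on http://localhost:8000 ...")
server.serve_forever()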

Service Pack

Service Pack A service pack is a software package that contains several updates for an application or operating system. Individual updates are typically called software updates or patches. When a software company has developed several updates to a certain program or operating system, the company may release all the updates together in a service pack. Many Windows users are familiar with service packs because of the popular service pack released for Windows XP, called SP2. Windows XP SP2 not only included typical updates such as bug fixes and security updates, it added new features. Some of the features included new security tools, interface enhancements to Internet Explorer and Outlook Express, and new DirectX technologies. In fact, the SP2 service pack for Windows XP was so comprehensive that many newer Windows programs require it in order to run. Service packs are usually offered as free downloads from the software developer's website. A software update program on your computer may even prompt you to download a service pack when it becomes available. Typically, it is a good idea to download and install new service packs. However, it may also be wise to wait a week or two after the service pack is released to make sure no new bugs or incompatibilities are introduced with the service pack. If you do not have a high-speed Internet connection, you can often purchase a service pack update CD for a small charge. While service packs are commonly released for Microsoft products, not all companies use them. Apple's Mac OS X, for example, uses the Software Update program to install incremental updates to the operating system. Each Mac OS X update includes several small updates to the operating system and bundled applications, much like a service pack. Updated December 30, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/servicepack Copy

Servlet

Servlet A servlet is a Java program that runs on a Web server. It is similar to an applet, but is processed on the server rather than a client's machine. Servlets are often run when the user clicks a link, submits a form, or performs another type of action on a website. Both servlets and JSP pages contain Java code that is processed by a Web server. However, servlets are primarily Java programs, while JSP pages are primarily HTML files. In other words, a servlet is a Java program that may contain HTML, while a JSP page is an HTML file that may contain Java code. Additionally, servlets require a specific structure and must contain the following three methods: init(), service(), and destroy(). The init() method initializes the servlet, allocates memory for the process, and passes any input parameters to the servlet. The service() method, which may also be specified as the doGet(), doPost(), doPut(), or doDelete() method, processes the HTTP request and typically provides a response that is sent to the client's browser. The destroy() method may save data to a log file and free up resources that were used by the servlet. Servlets are one of many options Web developers can use to create dynamic websites and process data entered by website visitors. Since they are written in Java, servlets provide an easy way for programmers who are already familiar with the Java programming language to create Web applications. Updated October 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/servlet Copy

Session

Session In the computing world, a session refers to a limited time of communication between two systems. Some sessions involve a client and a server, while other sessions involve two personal computers. A common type of client/server session is a Web or HTTP session. An HTTP session is initiated by a Web browser each time you visit a website. While each page visit constitutes an individual session, the term is often used to describe the entire time you spend on the website. For example, when you purchase an item on an ecommerce site, the entire process may be described as a session, even though you navigated through several different pages. Another example of a client/server session is an email session. Whenever you check your email with an email client, such as Microsoft Outlook or Apple Mail, you initiate a session with the mail server, typically using POP3 or IMAP (SMTP is used for sending mail). This involves sending your account information to the mail server, checking for new messages, and downloading the messages from the server. Once the messages have been downloaded, the session is complete. An example of a session between two personal computers is an online chat, or instant messaging session. This type of session involves two computers, but neither system is considered a server or client. Instead, this type of communication is called peer-to-peer, or P2P. Another example of P2P communication is BitTorrent file sharing, where file downloads consist of one or more sessions with other computers on the BitTorrent network. A P2P session ends when the connection between two systems is terminated. Updated February 2, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/session Copy
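The lifecycle of a client/server session (establish, exchange data, terminate) can be sketched with a raw TCP connection in Python. The host name below is only an example.

import socket

s = socket.create_connection(("example.com", 80))   # session begins
s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(s.recv(1024).decode(errors="replace"))         # data exchanged during the session
s.close()                                            # session ends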

Set

Set A set is a data type that consists of predefined values. It is similar to the ENUM data type, but a constant or variable defined as a set can store multiple values listed in the set declaration instead of just one. A set can be defined in the source code of a program or in a database table structure. For example, a set that stores different types of sports may be declared in Python as follows: Sports = {"soccer", "baseball", "tennis", "golf"} Similarly, a column in a MySQL database table may be defined like this: Sports SET ('soccer', 'baseball', 'tennis', 'golf') A variable or database value defined as Sports can be assigned one or more of the four sports listed above. Therefore, it would make sense to create the Sports variable as a set if it is likely to contain multiple values. One example is a web form that asks a user to select sports that he or she likes to play. Since the user can choose one or more of the predefined values, the resulting data should be stored as a set. If the form were to ask the user to choose his or her favorite sport, the variable should be declared as an enum instead. Like enums, sets can be used to ensure data integrity by limiting the values a variable or database record can store. However, this also makes sets less flexible than string or float data types. Therefore, sets are appropriate for data types that only contain a limited number of values. Updated November 25, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/set Copy
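A short sketch of the data-integrity point: code can check a user's selections against the predefined values before storing them. The variable names below are illustrative.

ALLOWED_SPORTS = {"soccer", "baseball", "tennis", "golf"}

user_choices = {"tennis", "golf"}          # a set may hold several values, unlike an enum
if user_choices <= ALLOWED_SPORTS:         # <= tests that every choice is an allowed value
    print("valid selection")
else:
    print("selection contains a value outside the set")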

SharePoint

SharePoint Microsoft SharePoint is a web-based platform used for sharing files and information. It is designed for teams and provides collaboration features, such as project management, messaging, and shared document storage. SharePoint is commonly used to create a company intranet, which can only be accessed by employees. It provides a secure platform for team members to exchange files and share information. SharePoint may also be used to create an extranet, as a way for remote clients and business partners to share content securely. Since SharePoint is web-based, users can access the content from a variety of devices using a standard web browser. SharePoint sites may be hosted by Microsoft using SharePoint Online, or internally using SharePoint Server. SharePoint Online SharePoint Online is included with Microsoft Office 365 for Business. It automatically creates a team site when you create an Office Group or Team. You can use the SharePoint web interface to customize the layout of the site, add pages, and share files. Content published using SharePoint Online is hosted in the cloud (on Microsoft's servers) and can be accessed remotely by authorized users. SharePoint Server SharePoint Server is a program that runs on a local machine, such as an internal web server, within a corporate network. It provides more control over teams and groups and extra site customization. It may also add extra security if the intranet is hosted behind a local firewall. However, because SharePoint Server is hosted locally, it requires additional hardware, more administration, and manual updates. For this reason, SharePoint Server is most often used for enterprise applications that require greater customization and security. Updated June 28, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sharepoint Copy

Shareware

Shareware Shareware is software that you can use on a trial basis before paying for it. Unlike freeware, shareware often has limited functionality or may only be used for a limited time before requiring payment and registration. Once you pay for a shareware program, the program is fully functional and the time limit is removed. In the 1980s and 1990s, shareware was a popular way for small developers to distribute software. The advent of CDs allowed multiple developers to deliver their software programs as a collection, such as "Top 100 Mac Games." Other shareware collections included utilities, graphics programs, and productivity applications. In many cases, these programs were fully functional and simply requested a donation from users. Programs that incessantly reminded users to register and pay for the software became known as "nagware." Today, the most common type of shareware programs are trial programs, which are also called "trialware" or "demoware." These programs are provided as demos that you can try for a limited time, such as two weeks or one month. Once the trial period expires, you must pay for the software in order to continue using it. Most shareware demos can be downloaded directly from the software publisher's website. Updated July 9, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/shareware Copy

Shell

Shell A shell is a software program that provides an interface between the user and the operating system. Users issue commands through the shell, which interprets the commands and passes them onto the operating system to execute. While the word "shell" is most often associated with a text-based command line interface, it may also refer to an operating system's graphical user interface. A command line shell is a text-based interface where you can type commands to perform functions such as running programs, opening and navigating directories, and viewing currently-running processes. However, to execute a command, you must know its proper syntax — both the command's name and how to correctly use arguments to modify it to do what you want. Once you know how to enter commands, you can even write shell scripts to automate actions. Shells are most commonly associated with Unix, as many Unix users like to interact with the operating system using the text-based interface. Different Unix and Linux distributions include several possible shells, like the original Bourne (sh) and C (csh) shells, Bash (bash), and Z shell (zsh). Windows includes two command line shells: a basic command prompt called cmd.exe, and an advanced shell and scripting language called PowerShell. A shell may also provide a graphical user interface (GUI). These interfaces make it easier to perform basic commands by including navigable menus and clickable buttons but are often unable to include every command possible in a command line shell. Explorer is Windows' GUI shell, the macOS shell is called Finder, and Unix-like operating systems support several possible GUIs like GNOME Shell and KDE Plasma. Updated March 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/shell Copy

Shift Key

Shift Key The Shift key is a modifier key on the keyboard. Its primary purpose is to capitalize lowercase letters. When the Shift key is not being pressed, letters are entered as lowercase by default (unless Caps Lock has been activated). When a user holds down the Shift key while typing a letter, the capitalized version of the letter is entered. The Shift key also affects other keys on the keyboard. For example, you might notice that the numbers near the top of the keyboard have symbols above them. These symbols are entered when you hold the Shift key while typing the corresponding number. For example, Shift+1 enters an exclamation mark (!), while Shift+8 enters an asterisk (*). The same is true for the punctuation keys, such as period, comma, semicolon, etc. The lower symbols are entered when the Shift key is not being pressed, while the top symbols are entered when the Shift key is held down. Since Shift is a modifier key, it can also be used in conjunction with the mouse. For example, many programs will allow you to select more than one item by holding down the Shift key and clicking multiple items on the screen. It can also be used to select a section of text by clicking in one spot, holding Shift, then clicking in another spot. Finally, Shift is sometimes used for keyboard shortcuts that perform common commands. For example, Control+S is the standard "Save" shortcut, while "Control+Shift+S" is typically used for "Save As..." Updated March 27, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/shiftkey Copy

Sidebar

Sidebar In computing, a sidebar is a user interface element that displays a list of choices. It typically appears as a column to the left of the main content, though it can appear on the right side as well. Both Windows and Mac OS X desktop windows include a sidebar by default. This column, which appears on the left side of each open window contains links to common folders, such as the Documents, Music, Pictures, and Downloads folders. It provides easy access to common locations on your computer, no matter what directory you are currently viewing. Sidebars are also used by many software programs. For example, Web browsers such as Mozilla Firefox and Apple Safari use a sidebar to display bookmarks and bookmark folders. Email clients, like Microsoft Outlook and Mozilla Thunderbird display mail folders or new messages in the left sidebar, while the message content is displayed to the right. Word processors, such as Microsoft Word and Apple Pages, can be configured to display a list of all the pages in a document within the left sidebar. Websites often use sidebars for navigation purposes. These sidebars typically contain links to the main sections of a website. If the sidebar is part of the website template, it provides easy access to any section of the website no matter what page you are viewing. When the primary navigation of a website is located in a sidebar, it may also be called a navigation bar. NOTE: Windows Vista introduced a feature called the Windows Vista Sidebar, which was used to display desktop gadgets. In Windows 7, the Windows Vista Sidebar was renamed to Windows Desktop Gadgets. Updated July 26, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sidebar Copy

Sideload

Sideload Sideload is a verb that describes a local file transfer. Unlike uploading and downloading, which take place over the Internet, sideloading involves transferring a file from one local device to another. Common ways to sideload data are: through a wired connection (a USB cable) over a wireless connection (Wi-Fi or Bluetooth) from a removable storage device (an SD card) While sideloading is possible between any two local devices, it most commonly refers to transferring a file to a mobile device, such as a smartphone or tablet. Sideloading is often used synonymously with installing an app outside an app store. Sideloading Mobile Apps The standard way to install an app on an iOS or Android device is to download it from Apple's App Store or Google's Play Store. Sideloading an app bypasses the app store, installing the app directly from a local device. iOS does not allow sideloading except for developers, who need to test their apps before publishing them to the App Store. Android provides a setting in Security → Unknown Sources that, when toggled on, allows apps to be installed or "sideloaded" outside the Play Store. Updated January 12, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sideload Copy

Sierra

Sierra Sierra, also known as macOS 10.12, is the 13th version of OS X, released on September 20, 2016. It followed El Capitan (OS X 10.11) and precedes macOS High Sierra. Sierra was the first version of OS X that Apple officially labeled "macOS" instead of OS X. Like other incremental releases of Apple's desktop operating system, Sierra built on previous versions, incorporating the same user interface and adding several new features. Notably, Sierra was designed to work more seamlessly with other Apple devices, such as iPhones and Apple Watches. For example, Sierra was the first version of macOS to support Universal Clipboard, which allows you to copy something on your phone and paste it on your Mac, or vice versa. It also supports automatic login using an Apple Watch. Safari, the web browser included with macOS Sierra, supports Apple Pay using an iPhone or Apple Watch. Sierra also increased iCloud integration by syncing the Desktop and Documents folders with iCloud drive. This makes it easier to keep files consistent across multiple Macs. It was also the first version of Apple's desktop OS to include Siri, Apple's voice assistant that was introduced in iOS. Siri can be accessed by clicking the icon in the right side of the menu bar or by pressing and holding Command+Space Bar. You can speak commands, such as "Show my Downloads folder" or "Make the screen brighter," instead of performing the actions with your keyboard and mouse. NOTE: Before Mac OS X (introduced in 2001), Apple's desktop operating system was called "Mac OS." With the release of Mountain Lion (OS X 10.8) in 2012, Apple changed the name from Mac OS X to just "OS X." The switch to "macOS" with Sierra marks a return to the "Mac OS" name, while using the same naming convention as Apple's other operating systems – iOS, watchOS, and tvOS. Updated July 25, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sierra Copy

Silent Install

Silent Install A silent install is the installation of a software program that requires no user interaction. It is a convenient way to streamline the installation process of a desktop application. Silent installs are performed by many legitimate software programs, but they are also used by malware and PUPs to hide the installation process from the user. A typical installer has several parameters that instruct the installer how to run. Examples include where the program should be installed, if a shortcut should be placed on the desktop, and if additional components should be installed. In a non-silent or "attended installation," the user is prompted to select or confirm these options during the installation process. In a silent install, these items are selected automatically and the installer runs from start to finish without requiring any user input. Silent installs are useful for simple programs that have limited installation options. They are also helpful for installing software on several machines at once. For example, a network administrator may prefer to distribute a software program via a silent installer to ensure all users within the network have the same installation settings. Even though silent installers run without any user interaction, legitimate programs typically require you to manually initiate the installation process. While silent installations are used for many good reasons, some programs, such as spyware and adware, use silent installers to run without your knowledge. These types of programs may even run without you initiating the installation process. They may be tacked on to another installer or executed by a malicious file, such as a virus. Updated April 24, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/silent_install Copy

SIM Card

SIM Card Stands for "Subscriber Identification Module Card." A SIM card is a small removable chip that identifies a mobile device on a cellular network. It contains an integrated circuit that stores a unique identifier called an "international mobile subscriber identity" (IMSI) number and other information specific to the mobile carrier. A SIM card or an embedded SIM is required in order for any cell phone, smartphone, or tablet to be used on a cellular network. When you activate a cell phone, the cellular provider links your phone number to your SIM card, which allows you to make and receive calls and access cellular data. If you replace your SIM card or get a new phone with a new SIM card, the new SIM identifier must be linked to your account in order for your mobile device to be recognized on the network. SIM cards have been standardized in increasingly smaller sizes since the first SIM card was used in 1991. Below are several SIM card formats. Full size (1991) - 85.6 x 53.98 mm x 0.76 mm Mini-SIM (1996) - 25 x 15 mm x 0.76 mm Micro-SIM (2003) - 15 x 12 mm x 0.76 mm Nano-SIM (2012) - 12.3 x 8.8 mm x 0.67 mm Cellular devices are designed to work with a specific SIM card format, so if you need to replace your SIM card, it is important to verify the correct size with your cellular provider. Removing a SIM card is usually pretty simple. Some devices, like the iPhone, have a SIM card tray that pops out from the side of the device. You can open the tray by pressing the access hole with a paperclip. Other devices, such as the Samsung Galaxy, require you to remove the back cover and battery to access the SIM card. You can remove and replace the SIM card without powering down your device. Updated July 24, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sim_card Copy

SIMM

SIMM Stands for "Single In-Line Memory Module." A SIMM is a small circuit board that holds memory chips, and is also known as a RAM stick. A SIMM is installed into a slot on a computer motherboard, allowing the RAM configuration in a computer to be customized. The amount of RAM a SIMM contains depends on the number of memory chips on it, as well as the capacity of each of those chips. Before the introduction of SIMMs, computer memory was installed one chip at a time into sockets on a motherboard. SIMMs were developed to make it easier to install a larger amount of memory at once, and to save space on the motherboard. Early SIMMs had 30 contact pins, and supported an 8-bit data bus. Later SIMMs increased the pin count to 72, added an off-center notch to guide installation, and utilized a 32-bit data bus to the CPU. Once computers began using a 64-bit bus, SIMMs had to be installed in matching pairs in order to work properly. [Image: A Kingston 72-pin SIMM, showing the off-center installation notch.] SIMMs were eventually replaced as the standard memory module by DIMMs, which supported a 64-bit data bus. While a SIMM has one set of electrical contact pins that are identical on both sides of the module, each side of a DIMM has separate pins. DIMMs could also be used one at a time, instead of requiring matching pairs. Updated October 3, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/simm Copy

Simplex

Simplex Simplex is a type of communication in which data can only be transmitted in one direction. It is often used in contrast to duplex communication, in which data can flow bidirectionally (back and forth) between two devices. Broadcasts, in which a single transmission is sent to many users, are a common type of simplex communication. For instance, radio broadcasts, television broadcasts, and Internet streaming are all examples of simplex transmissions. Since these mediums only need to transmit data, rather than receive it, simplex communication is all that is necessary. While simplex transmissions are sufficient for broadcasting, nearly all computer network protocols and computer interfaces are duplex, meaning they can both send and receive data. Updated April 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/simplex Copy

SIP

SIP Stands for "Session Initiation Protocol." SIP is a protocol defined by the Internet Engineering Task Force (IETF). It is used for establishing sessions between two or more telecommunications devices over the Internet. SIP has many applications, such as initiating video conferences, file transfers, instant messaging sessions, and multiplayer games. However, it is most well known for establishing voice and video calls over the Internet. VoIP companies, such as Vonage, Phone Power, and others, use SIP to provide Internet-based telephone services. This system, called "SIP trunking" allows clients to communicate over standard phone lines using IP phones or computers with VoIP software installed. A SIP server provides the translation from the VoIP connection to the public switched telephone network (PSTN). Similar to HTTP, SIP uses simple request and response messages to initiate sessions. For example, the INVITE request message is used to invite a user to begin a session and ACK confirms the user has received the request. The response code 180 (Ringing) means the user is being alerted of the call and 200 (OK) indicates the request was successful. Once a session has been established, BYE is used to end the communication. While SIP codes are not always seen by users, they can be useful when troubleshooting unreliable connections. NOTE: SIP may also stand for "Standard Interchange Protocol," which is a library system communication standard developed by 3M. SIP and SIP2 were designed to handle basic inventory operations, such as checking in and checking out library books. Both versions of the Standard Interchange Protocol have been largely replaced by the National Information Standards Organization Circulation Interchange Protocol (NCIP). Updated June 15, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sip Copy

Sitemap

Sitemap A sitemap (sometimes spelled "site map") is an overview of the webpages on a website. It typically includes a link to every page on the website, or at least those pages the webmaster considers important. A sitemap can help visitors to a website find a specific page while also providing web crawlers with an outline they can use to index it efficiently. There are several kinds of sitemaps, each serving a different role in website development and management. First, web developers often use sitemap diagrams when designing a website to lay out its structure, visualizing how to create links between pages and organize pages into directories. Later, the developer can use these sitemaps' structures to create human-readable sitemap HTML pages that present visitors with a structured outline of the entire site, with hyperlinks to sections and specific pages. Finally, they can make a sitemap file (designed for SEO purposes) that lists a site's pages in a programmatic and structured way. [Image: The sitemap page for Microsoft's website, which displays links to important pages organized by category.] Web crawlers access a site's sitemap file to know what URLs they need to index, which helps make crawls faster so that pages appear in search engine results sooner. Sitemap files may be as simple as a text file listing the URL for each page, but many sites now use XML documents based on Google's Sitemap protocol. The Sitemap protocol defines an XML schema that includes optional metadata like when the page was last updated and its priority relative to other pages on the site. CMS systems often generate a sitemap automatically, updating it whenever new content is published, although some webmasters prefer to update their sitemaps manually. Updated September 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sitemap Copy
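For illustration, the following Python sketch writes a minimal sitemap file using the XML format from the Sitemap protocol; the URLs and date are placeholders.

import xml.etree.ElementTree as ET

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in ["https://example.com/", "https://example.com/about.html"]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page
    ET.SubElement(url, "lastmod").text = "2023-09-13"   # optional metadata
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)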

Skin

Skin A skin is a file or package that changes the appearance of a program or operating system's interface. Skins provide a graphical user interface (GUI) overhaul for an application or operating system, allowing users to customize the look and feel of software on their device. Some applications use the term "theme" instead of skin, often to imply subtler changes rather than a full GUI transformation. Since building in support for skinning takes extra development effort, relatively few applications support it. The use of skins that completely redesign an interface is generally limited to media players like WinAmp and VLC. Support for simpler, customizable themes is more common, including in web browsers like Google Chrome. Applications that support themes often include built-in theme galleries that let you browse and apply themes to change colors, fonts, and icons. The Android operating system allows device manufacturers to include a skin that changes the appearance of the entire interface and introduces new functionality. However, these skins are not easily customizable by the device's owner. Changing an Android device's skin requires you to root the device or install separate software not included by the manufacturer. Updated March 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/skin Copy

SKU

SKU Stands for "Stock Keeping Unit," and is conveniently pronounced "skew." A SKU is a number or string of alphanumeric characters that uniquely identifies a product. For this reason, SKUs are often called part numbers, product numbers, and product identifiers. A SKU may be a universal number, such as a UPC code or supplier part number, or a unique identifier used by a specific store or online retailer. For example, one company may use the 10-character identifier supplied by the manufacturer as the SKU of an external hard drive. Another company may use a proprietary 6-digit number as the SKU to identify the part. Many retailers use their own SKU numbers to label products so they can track their inventory using their own custom database system. When shopping online or at retail stores, knowing a product's SKU can help you locate the exact product at a later time. It will help you identify a unique product when there are many similar options, such as a TV model that comes in different colors, sizes, etc. If you know a product's SKU, you can typically locate the product online by typing the SKU in the online retailer's search box. If you visit a retail store and have questions about a product you saw in an ad, knowing the SKU will help the salesperson find the exact product you are asking about. SKUs are typically listed in small print below the product name and are often preceded by the words "SKU," "Part Number," "Product ID," or something similar. Updated November 28, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sku Copy

Skyscraper

Skyscraper While not as common as the banner ad, the skyscraper is another prevalent form of Web advertising. Skyscraper ads, which are tall and narrow, get their name from the tall buildings you often see in big cities. They are typically placed to the left or right of the main content on the page, which allows all or part of the ad to be visible even as the user scrolls down the window. Skyscraper ads come in two standard sizes -- 120 pixels wide by 600 pixels high (120x600) and 160 pixels wide by 600 pixels high (160x600). They can contain text advertisements, images, or animations. When users click on a skyscraper ad, they are redirected to the advertiser's website. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/skyscraper Copy

SLA

SLA In the computer world, an "SLA" may refer to either 1) a software license agreement, or 2) a service level agreement. 1. Software License Agreement A software license agreement (also called an end user license agreement or EULA) is a contract between the software developer and the user. While boxed software used to come with a hard copy of the SLA, most software license agreements are now distributed digitally. For example, the software license agreement is usually displayed on your screen during the software installation process. After scrolling through an SLA, you will typically see a button that says, "I Agree" or "I Accept." By clicking the button, you agree to the terms of the SLA and therefore are allowed to install and use the software. Most software license agreements are rather long and contain a lot of legal terms. The SLA may include license limitations, such as how many people can use the software and how many systems the software may be installed on. Most SLAs also include disclaimers that state the developer is not liable for problems caused by the software. While these disclaimers are often several paragraphs in length, they are basically saying, "Use at your own risk." 2. Service Level Agreement A service level agreement is a contract between a service provider and a user. In most cases, the agreement is between a business and a consumer, though SLAs may be established between two businesses as well. In either case, the SLA defines specific services that are guaranteed over a given amount of time, often for a specific price. A common example of a service level agreement is a computer warranty that covers the parts and labor of a computer. The warranty may state that the manufacturer agrees to cover any service costs due to failed components for one or more years. Other SLA examples include monthly contracts with ISPs, or annual contracts with cell phone companies. In both instances, the companies agree to provide reliable services to customers as long as they pay their monthly subscription fee. Updated July 24, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sla Copy

Slashdot

Slashdot When a website experiences a sudden and overwhelming surge in traffic due to a mention on a high-traffic site, the server may become completely unreachable. This phenomenon is called being "slashdotted." The term originated in October 1998 after a press release was published on Slashdot.org, a popular technology news website. The publication caused a significant spike in traffic to another server, leading to its crash. While the term "slashdotting" was common in the late 1990s and early 2000s, it is rarely used today. Modern web servers are better equipped to handle traffic spikes, making it rare for a site to become unreachable. Updated July 5, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/slashdot Copy

SLI

SLI Stands for "Scalable Link Interface." SLI is a technology developed by NVIDIA that allows multiple graphics cards to work together in a single computer system. This enables faster graphics performance than what is possible with a single card. For example, using SLI to link two cards together may offer up to twice the performance of a single video card. If each card has two GPUs, the result may be up to four times the performance of a typical video card! For video cards to be linked using NVIDIA's SLI system, the computer must have multiple PCI Express slots. PCI Express is the first video card interface that allows the linking of multiple graphics cards because the slots share the same bus. Previous technologies, such as PCI and AGP, used separate buses, which did not allow graphics cards to be bridged together. The PCI Express slots must also support enough bandwidth for the cards, which typically means they must be x8 or x16 slots. Of course, the cards themselves must also support SLI bridging in order to work together. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sli Copy

SMART

SMART Stands for "Self-Monitoring, Analysis, and Reporting Technology." SMART is a monitoring system built into hard disk drives and solid-state drives that detects and reports errors that may lead to drive failure. By providing a warning when errors are first detected, it helps prevent data loss by giving you time to back up your data and replace a disk before it fails. SMART monitors more than 80 attributes of a disk drive, such as its temperature, the time it takes to spin up, or the number of read or write errors that have been recorded. The manufacturer of a disk drive programs the SMART system with a threshold value for each attribute, so that SMART can provide an alert when one is outside of the normal range. A SMART warning can appear while a computer is booting (during POST), or as a notification through the computer's operating system. A disk drive with a SMART system can perform two types of self-test: a short Disk Self Test (DST) or a long DST. A computer runs a short DST on each disk when it boots, and the disk can still be used while the test runs. A long DST can be performed by disk management software built into the BIOS or the operating system, or by software provided by the manufacturer. A disk cannot be used during a long DST, since it is a very resource-intensive process, but it is wise to run one periodically to check a disk's health. Updated September 30, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/smart Copy
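As one way to see this information in practice, the sketch below calls the open-source smartmontools utility (smartctl) from Python; it assumes smartctl is installed, that /dev/sda is the disk of interest, and that the script runs with administrator privileges.

import subprocess

DISK = "/dev/sda"  # hypothetical device path; adjust for your system

# Overall health verdict and the full table of SMART attributes
subprocess.run(["smartctl", "-H", DISK], check=True)
subprocess.run(["smartctl", "-A", DISK], check=True)

# Start a short Disk Self Test (DST); the disk remains usable while it runs
subprocess.run(["smartctl", "-t", "short", DISK], check=True)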

Smart Connect

Smart Connect Smart Connect is a feature on wireless routers and access points that broadcasts a single SSID over both 2.4 GHz and 5 GHz frequencies. It allows a Wi-Fi router to automatically assign wireless devices to the ideal frequency and channel and reassign devices when necessary. Wi-Fi networks can operate over two separate radio frequencies: 2.4 GHz and 5 GHz. 2.4 GHz networks can operate over a longer range than 5 GHz, but 5 GHz networks have higher data transfer rates. Without Smart Connect enabled, a router needs to use a separate SSID for each frequency. Your computers and mobile devices will need to connect to and save each SSID separately if you want them to use both. A device will typically choose the network with the stronger signal when it tries to connect to a Wi-Fi network and will remain on that frequency until you disconnect it, regardless of whether the other one is stronger later. With Smart Connect enabled, computers and devices only need to connect to and save a single SSID. When a device first connects to the wireless network, the router assigns it to the frequency and channel with the strongest connection at the time. As it moves around, and other devices use network bandwidth, the router will transfer it to the other frequency when necessary to maintain a strong connection. This allows the router to spread traffic across all available frequencies instead of letting one become overloaded while the other is idle. Updated March 2, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/smart_connect Copy

Smart Home

Smart Home A smart home is a house or other dwelling with automated or remotely controlled components. Some "smart" components require a proprietary interface though most smart home features can be controlled by a mobile device or computer. While there are no technical requirements for a smart home, a basic smart home might have remote controlled lighting and an automated thermostat. If your home has a smart lighting system, for example, you can configure different lights to turn on during the day and automatically switch off all the lights late at night. A smart thermostat can be configured to keep the house warm during the day and cool at night. If you leave your house for several days, you can set the thermostat to "vacation mode," which will reduce the energy usage of your HVAC system. Advanced smart homes may include several other "smart" components. For example, smart blinds can shut to keep the house cool or open to allow more heat in through the windows. Some blinds can be programmed to open slowly in the morning to help you wake up to natural lighting. A smart security system can monitor suspicious activity and sound an alarm or contact the police if necessary. It may also provide convenient features such as automatically unlocking the front door and turning on lights when you pull into the driveway. A smart home may also include "smart appliances" that you can monitor and control remotely. For example, you can check if you remembered to run the dishwasher before you left the house and turn it on from your mobile app if you forgot. If you have a smart oven, you can turn it on while you are on the road so that your food will be cooked when you get home. A smart refrigerator can detect the items it contains and let you know when you need to get more milk when you're at the grocery store, for example. Several different manufacturers make smart components and appliances. While it is ideal to design a smart home from the ground up using a single brand of components, in many cases that isn't possible. Therefore, when adding smart components to your home, you may have to get accustomed to multiple interfaces. Fortunately, most smart components come with easy-to-use apps that run on both iOS and Android devices. Updated July 19, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smart_home Copy

Smartphone

Smartphone A smartphone is a mobile phone that includes advanced functionality beyond making phone calls and sending text messages. Most smartphones have the capability to display photos, play videos, check and send e-mail, and surf the Web. Modern smartphones, such as the iPhone and Android-based phones, can run third-party applications, which provides nearly limitless functionality. While smartphones were initially used mostly by business users, they have become a common choice for consumers as well. Thanks to advancements in technology, modern smartphones are smaller and cheaper than earlier devices. Users also have a much wider range of smartphones to choose from than before. While the RIM BlackBerry dominated the smartphone market for many years, other manufacturers like Apple, HTC, and Samsung now offer a wide variety of smartphone options as well. This increase in smartphone availability has led to a corresponding decline in the usage of standard PDAs, which do not include phone capabilities. Since smartphones have a wide range of functionality, they require advanced software, similar to a computer operating system. The smartphone software handles phone calls, runs applications, and provides configuration options for the user. Most smartphones include a USB connection, which allows users to sync data with their computers and update their smartphone software. Updated July 30, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smartphone Copy

Smartwatch

Smartwatch A smartwatch is a digital watch that provides many other features besides timekeeping. Examples include monitoring your heart rate, tracking your activity, and providing reminders throughout the day. Like a smartphone, a smartwatch has a touchscreen display, which allows you to perform actions by tapping or swiping on the screen. Modern smartwatches include several apps, similar to apps for smartphones and tablets. These apps provide additional functionality, such as displaying weather information, listing stock prices, and displaying maps and directions. Most smartwatches can also be used to make phone calls and send and receive text messages. While these apps run directly on the smartwatch, they require a smartphone to function. This is because the data is first received by the phone, then sent to the watch. Most smartwatches do not include Wi-Fi, and they do not have a SIM card for cellular data. Therefore, most apps rely on a compatible smartphone to provide data over a Bluetooth connection. For example, the text messaging app on your smartwatch may allow you to dictate and send a text message, but the actual message is sent using your phone. If your watch is not within range of your phone's Bluetooth signal, the message will not be sent. Since smartwatches rely on smartphones for a large percentage of their functionality, they are generally considered a smartphone accessory rather than a standalone device. Still, smartwatches provide a number of features that don't require a smartphone. For example, activity tracking is possible using the smartwatch's built-in accelerometer and heart rate monitor. A smartwatch with a GPS receiver can accurately track and record outdoor runs. If your watch has an NFC (near field communication) chip, you can pay for purchases with your watch using a stored credit card. Finally, if your watch has enough storage for music files, you can play songs directly from your watch using wireless headphones. Popular smartwatches include the Apple Watch (watchOS), Samsung Gear (Android Wear), and LG Watch (Android Wear). Updated August 5, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smartwatch Copy

SMB

SMB Stands for "Server Message Block." SMB is a network communication protocol for file and printer sharing over a local or wide-area network. File servers use SMB to create shared folders that other computers can browse over the network just as easily as if they were on the computer's local disk. Only Windows-based computers use SMB sharing by default, but other operating systems can enable SMB support through software implementations like Samba. The SMB protocol uses the client-server model to exchange data between computers. The client first requests access to a shared resource using the server's hostname or IP address. The server evaluates that request, then asks the client for authentication using a username and password. Once authenticated, the client can request specific files or other resources through additional messages to the server. The server uses SMB to break the files into packets for transfer, reassembling them on the client's disk. A folder shared using SMB in Windows 10 There have been several updates to the SMB protocol since its original implementation, offering increased security, reliability, and performance. The original version, SMB 1.0, was created to add network file sharing to DOS in 1983 and was later included in Windows. SMB 2.0 was released in 2006, adding stronger authentication while improving performance and reducing network overhead. SMB 3.0 came in 2012, adding end-to-end encryption for file transfers for increased security. Updated May 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/smb Copy
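Because an SMB share appears to programs like any other folder, reading a shared file on a Windows client can be as simple as the sketch below; the server name, share name, and file name in the UNC path are hypothetical.

# Opens a file on an SMB share through its UNC path (on a Windows client)
unc_path = r"\\fileserver\shared\report.txt"   # hypothetical server, share, and file

with open(unc_path, "r", encoding="utf-8") as f:
    print(f.read())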

Smishing

Smishing Smishing is a combination of the terms "SMS" and "phishing." It is similar to phishing, but refers to fraudulent messages sent over SMS (text messaging) rather than email. The goal of smishing is to capture people's personal information. In order to do this, "smishers" send out mass text messages designed to capture the recipients' attention. Some messages may be threatening, e.g., "Visit this URL to avoid being charged $5.00 per day," while others may provide a fake incentive, such as "You have won a free gift card, visit this website to claim your prize." If you click on a link in the text message, you will be directed to a fraudulent website that will ask you to enter your personal information, such as your name, address, phone number, and email address. In some cases, a smishing website will ask you to enter your bank account information or social security number. Smishing has become increasingly common now that smartphones are widely used. Many smartphones allow you to simply click on a link in a text message to view the website in your phone's browser. This makes text messages an effective "bait" for luring unsuspecting users to fraudulent websites. Therefore, just like when you receive email spam, it is best not to visit websites mentioned in text messages from unknown sources. Updated May 15, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smishing Copy

SMM

SMM Stands for "Social Media Marketing." SMM refers to marketing done through social media or social networking websites. While most companies and organizations have their own websites, it can be difficult to reach users who do not already know about the organization. Therefore, many organizations have found it useful to also develop a presence on "Web 2.0" websites, such as Facebook, LinkedIn, and Twitter. Social media marketing provides a low-cost way for businesses to reach large numbers of users and gain brand recognition. Since social networking websites already have large established online communities, businesses and organizations can gain exposure by simply joining these websites. Organizations can create custom social media profiles, then build their own communities within these sites by adding users as friends or followers. Many companies attract users by posting frequent updates and providing special offers through their social media profile pages. While SMM is a powerful online marketing tool, it is typically used to supplement other online marketing methods rather than replace them. Since just about any company or business can join a social networking website, it can be difficult to stand out from the crowd. Therefore, most companies still rely on Web advertising and search engine optimization (SEO) to generate traffic to their websites. Updated August 24, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smm Copy

SMP

SMP Stands for "Symmetric Multiprocessing." SMP is a type of multiprocessing in which all processors may be used at the same time. It is sometimes contrasted with asymmetric multiprocessing, which delegates each processor to a specific task. The vast majority of computers with multiple processors (or processing cores) support SMP. It is implemented at the operating system level, which distributes computing tasks across two or more processors. Modern OSes, such as Windows, macOS, and Unix, have supported SMP since the late 1990s. Most software applications benefit automatically from SMP, thanks to the underlying OS support. However, developers may still optimize apps to use multiple processors even more effectively. For example, a video editing program might send specific rendering operations to different processors, which may be more efficient than letting the OS balance the processing load. How SMP Works SMP requires multiple homogeneous processors, meaning each processor has the same architecture and clock speed. All processors share the same system bus and system memory, so no processor is prioritized over the others. The OS distributes computing operations across the processors (or processing cores), attempting to use them all equally. SMP can substantially improve a computer's performance compared to a single-processor system. However, since individual threads are sent to different processors, there is overhead in distributing the tasks. Additionally, not all computing operations can be split across multiple processors. Therefore, SMP does not increase performance by 100% for each additional processor. Updated December 30, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/smp Copy
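As a rough sketch of how an application can take explicit advantage of SMP, the Python example below uses the standard multiprocessing module to spread a CPU-bound task across all available processors; the workload itself is a placeholder.

from multiprocessing import Pool, cpu_count

def heavy_task(n):
    # Placeholder for CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    # One worker process per processor; the OS schedules them across the cores
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(heavy_task, jobs)
    print(results[:2])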

SMS

SMS Stands for "Short Message Service." SMS is a communications technology that sends text messages to mobile phones. SMS messages are short, up to 160 characters long when using alphanumeric characters, or up to 70 characters if the message includes special symbols or accented letters. They are transmitted across mobile phone networks and addressed to phones using the same phone number as voice calls. SMS technology was first created for phones on GSM mobile networks in the early 1990s. The convenience of sending a short text message instead of making a phone call helped it catch on quickly, even though mobile phones of the time lacked proper keyboards — users had to type their messages using a numerical keypad. SMS is a store-and-forward technology, so if a phone is offline or out of a network's range, the carrier holds on to incoming messages until the phone reconnects to the network. In addition to mobile-to-mobile messaging, SMS allows computer systems connected to a mobile network to send automated messages. These services can use either a normal phone number or a unique short code — a 5- or 6-digit number assigned by a mobile carrier. They can send messages in response to an SMS prompt, an online request, or without any prompt at all. For example, a two-factor authentication service can use SMS to send you a login code, while a mobile payment processor can use it to send you a transaction receipt. Other services may send you appointment reminders, marketing messages, or news updates. While SMS is limited to sending short, text-only messages, other technologies based on it add more features. MMS (Multimedia Messaging Service) is an extension of SMS that lets you send images, video, and audio files. RCS (Rich Communication Services) adds even more features to SMS, including delivery and read receipts and a much higher character limit. However, other messaging services built on IP networks like WhatsApp and Apple's iMessage are now more popular than basic SMS. These allow you to send messages from a smartphone, tablet, or computer through any Internet connection instead of relying strictly on a mobile phone network. Updated October 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sms Copy

SMTP

SMTP Stands for "Simple Mail Transfer Protocol." SMTP is the protocol used to send email over the Internet. Email clients (like Microsoft Outlook or the macOS Mail app) use SMTP to connect to a mail server and send email messages. Mail servers also use SMTP to exchange messages from one mail server to another. It is not used to download email messages from a server; instead, the IMAP and POP3 protocols retrieve messages. An email client first establishes an SMTP connection over TCP with the mail server. The client then pushes the message to the server, which checks the recipient's email address to see if they're on the same domain as the sender; if they are, it can deliver the message. Otherwise, it runs a DNS query to find the IP address of the recipient's mail server. The mail server then establishes another SMTP connection with the destination mail server to send the message. When configuring your email client, you may be required to specify separate mail servers for incoming and outgoing mail. While some email services use the same server for sending and receiving mail (e.g., mail.example.com), other services use separate servers for the two tasks (e.g., imap.example.com and smtp.example.com). NOTE: Modern email clients and mail servers use a modified version of SMTP called Extended SMTP, or ESMTP. This updated version includes support for email attachments and TLS security. Updated February 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/smtp Copy
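For example, a short script using Python's standard smtplib module can hand a message to an outgoing mail server over SMTP; the server name, port, credentials, and addresses below are hypothetical placeholders for your provider's actual settings.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Sent with SMTP.")

# smtp.example.com stands in for your provider's outgoing (SMTP) mail server
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                              # upgrade to an encrypted connection
    server.login("alice@example.com", "app-password")
    server.send_message(msg)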

Snake Case

Snake Case Snake case (or "snake_case") is a naming convention that replaces spaces in compound words with underscores (_). It is commonly used by software developers for writing method and variable names in their source code, as well as when naming files used by their projects. While snake_case typically uses all lower-case letters, a version known as SCREAMING_SNAKE_CASE uses all capital letters. In most programming languages, the names of methods, variables, and other elements cannot contain spaces. Developers use naming conventions like snake_case, camelCase, and PascalCase to make compound names readable while avoiding compilation errors. For example, the variable name my_first_variable is easier to read than myfirstvariable. The underscores in snake_case recall a snake slithering on the ground In many cases, it is up to the developer's preference which convention to use, but some programming languages have style guides that specify certain conventions for certain uses. For example, Python's style guide recommends a mix of several naming conventions for different uses: Classes: PascalCase — class UserInfo Functions: snake_case — print_message() Methods: snake_case — draw_image() Variables: snake_case — new_user_name Constants: SCREAMING_SNAKE_CASE — MAX_VALUE Other programming languages that use snake_case for variables, functions, and methods include Ruby, Rust, PHP, and Perl. Shell scripting languages, like Bash, also often use snake_case for variables and functions. Updated May 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/snake_case Copy
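A short, hypothetical Python fragment showing those conventions side by side:

MAX_RETRIES = 3                      # constant: SCREAMING_SNAKE_CASE

class UserInfo:                      # class: PascalCase
    def get_display_name(self):      # method: snake_case
        first_name = "Ada"           # variables: snake_case
        last_name = "Lovelace"
        return f"{first_name} {last_name}"

print(UserInfo().get_display_name())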

Snapchat

Snapchat Snapchat is a mobile app and service for sharing photos, videos, and messages with other people. Once you view a message received via Snapchat, it is automatically deleted. This makes the service ideal for sharing quick updates with friends without accumulating media or messages on your mobile device. The Snapchat app is available for iOS and Android devices. Once you download and install the app, you can create an account and add friends. You can then take a "snap" and send it to one or more of the people in your friends list. You can also use Snapchat to send quick text messages that disappear once the recipient reads them. To take a photo in Snapchat, simply tap the capture button while the camera is active. To take a video, hold down the button for a few seconds to record a short clip. Once you've captured your photo or video, you can swipe right or left to apply filters or add other effects. You can tap the "T" icon in the upper right to add text and tap the pencil icon to select a color and draw on your snap before sending it. If you're taking a selfie, you can press and hold on your face before capturing the shot. This will allow the app to detect your face and you can apply fun effects to your selfie before sending it. Snapchat allows you to send snaps directly to specific friends or share them with all your friends by adding them to your "Story." After you capture a photo or video, you can tap the "+" icon near the bottom of the screen to add the snap to your story. Your friends can view the images and videos you've added for 24 hours after you publish them. You can also view your friends' stories by swiping to the Stories screen within the app. Updated February 21, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/snapchat Copy

Snapshot

Snapshot A snapshot is a record of a disk or volume's contents at a specific point in time. Taking a snapshot preserves the state of files, folders, and the overall file system. Each snapshot is incremental and saves the changes from the previous snapshot, which helps a computer user roll data back to an earlier state quickly. A snapshot is not a backup, but snapshots can be a valuable part of a data backup and recovery strategy. Taking a snapshot of a disk volume does not require the time and workflow interruption that making a full backup does, since a snapshot does not actually create a copy of data elsewhere. Instead, taking a snapshot freezes the volume as it is, and any new data (as well as changes to existing data) gets added as new data blocks outside the snapshot. Data blocks within each snapshot are read-only, so each subsequent snapshot needs to preserve only the data blocks added since the previous one. An APFS volume's snapshots listed in the macOS Disk Utility app Rolling back to an earlier version of the volume only requires the system to discard or ignore every data block added after the end of the chosen snapshot. Over time, the system removes old snapshots by merging snapshots together, following the system's retention policy that may limit the number, age, or total size of the volume's snapshots. For example, one common policy is to maintain daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a year. Snapshots are not replacements for external data backups. A snapshot can help restore a deleted file or roll back to an older version of a file, but only an external backup can preserve data if the original disk is damaged. Some file systems natively support full file-system snapshots using basic file-system commands, including ZFS and APFS. Other file systems support snapshots only through software utilities; for example, NTFS and ReFS volumes support snapshots through a Windows service called Volume Shadow Copy Service (VSS). Many storage array systems and NAS devices also support volume snapshots, which administrators can configure through their management applications. Updated December 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/snapshot Copy
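As one concrete example of this workflow, the sketch below drives ZFS snapshots from Python; it assumes a ZFS dataset named tank/data exists and that the script runs with sufficient privileges.

import subprocess

DATASET = "tank/data"  # hypothetical ZFS dataset

# Take a named snapshot before making a risky change
subprocess.run(["zfs", "snapshot", f"{DATASET}@before-upgrade"], check=True)

# List the dataset's existing snapshots
subprocess.run(["zfs", "list", "-t", "snapshot", "-r", DATASET], check=True)

# If something goes wrong, roll the dataset back to the saved state:
# subprocess.run(["zfs", "rollback", f"{DATASET}@before-upgrade"], check=True)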

Snippet

Snippet A snippet is a small section of text or source code that can be inserted into the code of a program or Web page. Snippets provide an easy way to insert commonly used code or functions into a larger section of code. Instead of rewriting the same code over and over again, a programmer can save the code as a snippet and simply drag and drop the snippet wherever it is needed. By using snippets, programmers and Web developers can also organize common code sections into categories, creating a cleaner development environment. Snippets used in software programming often contain one or more functions written in C, Java, or another programming language. For example, a programmer may create a basic "mouse-down event" snippet to perform an action each time the user clicks a mouse button. Other snippets might be used to perform "Open file" and "Save file" operations. Some programmers also use plain text snippets to comment code, such as adding developer information at the beginning of each source file. In Web development, snippets often contain HTML code. An HTML snippet might be used to insert a formatted table, a Web form, or a block of text. CSS snippets may be used to apply formatting to a Web page. Web scripting snippets written in JavaScript, PHP, or another scripting language may be used to streamline dynamic Web page development. For example, a PHP snippet that contains database connection information can be inserted into each page that accesses information from a database. Whether programming software or developing websites, using snippets can save the developer a lot of time. Updated January 15, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/snippet Copy
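Such a reusable fragment might look like the hypothetical Python snippet below, which holds placeholder database connection details that a developer can drop into any script that needs them.

# Reusable "database connection" snippet; every value is a placeholder
DB_CONFIG = {
    "host": "localhost",
    "port": 5432,
    "database": "example_db",
    "user": "app_user",
    "password": "change-me",
}

def get_connection_string(cfg=DB_CONFIG):
    return (f"postgresql://{cfg['user']}:{cfg['password']}"
            f"@{cfg['host']}:{cfg['port']}/{cfg['database']}")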

SNMP

SNMP Stands for "Simple Network Management Protocol." SNMP is a protocol for exchanging management messages between network devices. It helps network administrators remotely monitor and configure devices like switches, routers, servers, and printers. SNMP is common on enterprise-level networking equipment but is rare on consumer-level hardware. SNMP creates a client-server relationship between network devices (which run a piece of server software called an SNMP agent) and the management tools used by network administrators. An SNMP agent keeps track of configuration and performance data about its device and stores that information in a type of database called a Management Information Base (MIB); every object in that database, like a router's uptime or its QoS settings, is given an Object Identifier (OID) for identification. When an SNMP management tool connects to an agent, it can send messages to that agent requesting information and writing configuration settings. SNMP is used by Business and Enterprise networking equipment like the Cisco Business 350 switch The SNMP protocol uses seven standard message operations. The Get command requests the value of a specific object by specifying its OID. The GetNext command gets the next available object value after the specified OID, and the GetBulk command retrieves values from a range of OIDs. Response messages return data from the SNMP agent to the SNMP manager following a Get request. The Set command allows the manager to change an object's value to update a configuration setting. The Trap command creates a notification that alerts the manager when a specific condition on the network device is met; the Inform command also creates a notification but instructs the SNMP manager to acknowledge when the message is delivered successfully. The current version of SNMP is SNMP v3, which improves on the shortcomings of earlier versions. SNMP v1 was simple but also insecure; it sent all SNMP messages (including those containing passwords) as plain text, which allowed packet sniffers to intercept data. SNMP v2 added the GetBulk command to make it more efficient to retrieve many variables at once. SNMP v3 added an authentication system and supports the encryption of SNMP messages using either MD5 or SHA hashing algorithms. Updated October 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/snmp Copy
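As a simple illustration of a Get operation, the sketch below calls the snmpget tool from the open-source Net-SNMP package; the device address and community string are hypothetical, while 1.3.6.1.2.1.1.3.0 is the standard OID for the sysUpTime object.

import subprocess

DEVICE = "192.168.1.1"        # hypothetical router address
COMMUNITY = "public"          # SNMP v2c community string
SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"

# Equivalent to an SNMP Get request for the device's uptime
result = subprocess.run(
    ["snmpget", "-v", "2c", "-c", COMMUNITY, DEVICE, SYS_UPTIME_OID],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())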

Snow Leopard

Snow Leopard Snow Leopard is another name for Mac OS X 10.6, which was released on August 26, 2009. It followed Mac OS X 10.5 Leopard and preceded the release of Mac OS X 10.7 Lion. Unlike Leopard, Mac OS X Snow Leopard did not include hundreds of new features. Instead, the update was primarily designed to improve the performance and efficiency of Mac OS X. Snow Leopard was the first Mac operating system to require an Intel-based Mac, which means it cannot be used on older Macintosh computers. However, by removing the PowerPC code from the operating system, Apple was able to shrink the size of the operating system by roughly 7 gigabytes compared to Leopard and improve the speed of common operations. While Snow Leopard was not a feature-oriented release, it did include a few notable additions. For example, Snow Leopard was the first version of Mac OS X to include autocorrect support, which automatically fixes common typos. Additionally, Snow Leopard added Microsoft Exchange support to Mail, Address Book, and iCal, which is important for Macs that are used in Windows-based organizations. Mac OS 10.6 also shipped with several new versions of Apple software, including Safari 4, iChat 5, and QuickTime X, a completely rewritten version of QuickTime. The final Snow Leopard software update was 10.6.8, released on July 25, 2011. Updated October 24, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/snow_leopard Copy

SO-DIMM

SO-DIMM Stands for "Small Outline Dual In-line Memory Module." A SO-DIMM is a compact memory module roughly half the size of a full-size DIMM module. Like a DIMM, a SO-DIMM is a small circuit board containing memory chips that serve as a computer's RAM. Laptop computers and some compact desktops use SO-DIMM modules, as do upgradeable office printers, routers, and NAS devices. Despite their small size, SO-DIMMs are generally available in similar capacities as full-size DIMMs and typically only cost slightly more due to the extra cost of miniaturizing components. Since laptop computers are designed to be compact, they don't have the physical space necessary for full-sized DIMM modules. Instead, they use compact SO-DIMM modules designed to lie flat against the laptop's motherboard (instead of sticking up perpendicular to the motherboard, as in desktop computers). Some laptops make their RAM user-accessible by providing a small door or access panel, allowing you to replace old modules with new, higher-capacity ones. A Crucial DDR4 SO-DIMM module Like full-size DIMMs, each new generation of SO-DIMMs changes the pin count and notch position to prevent incompatible memory from being installed. The earliest SO-DIMM modules only used 72 pins to connect to the laptop's motherboard, but DDR SO-DIMMs use nearly as many pins as their full-size counterparts — DDR1 and DDR2 modules use 200 pins, DDR3 modules use 204, DDR4 uses 260, and DDR5 SO-DIMM modules use 262 pins. Updated November 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sodimm Copy

SOA

SOA Stands for "Service Oriented Architecture." When businesses grow, they often add new products and services. While these additions may help make the business larger, it is often difficult to implement them in an efficient manner. The goal of SOA is to make it easy for businesses to grow and add new services. The Service Oriented Architecture is based on components that work seamlessly with each other. For example, a company sells clothing through an online store. After a few months of successful sales, the company decides to add a jewelry department. An SOA component conveniently adds a new section to the store, in this case, specifically for jewelry. The company then wants to add new shipping options. A SOA shipping component makes adding the new shipping options as easy as checking a few boxes in an administrative control panel. Initially, the company only offered e-mail support, but later decides that adding phone support would be beneficial. A phone support component allows the phone representatives to look up customer orders the same way the e-mail support specialists could. Basically, SOA makes it possible for a business to add new features and services without having to create them from scratch. Instead, they can be added or modified as needed, making it simple and efficient to expand the business. Because many products and services are now offered via the Web, most SOA solutions include Web-based implementations. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/soa Copy

SOAP

SOAP Stands for "Simple Object Access Protocol." SOAP is a protocol used in web services to exchange structured information over the internet in a standardized format. It provides a way for applications to communicate with each other, regardless of their operating system or programming language. SOAP is an XML-based protocol, which makes it highly versatile and widely supported. It uses XML to encode its messages, ensuring the message format is understandable on any platform that supports XML. SOAP typically transmits messages over HTTP, although they can also be transmitted over other protocols. The HTTP transport mechanism leverages the widespread availability and use of HTTP infrastructure, making SOAP messages capable of passing through firewalls and other network barriers without requiring special tunneling techniques. A key feature of SOAP is support for WS-Security, an extension that adds security features to SOAP web services. It provides mechanisms for encrypting the message content and ensuring the integrity and confidentiality of the messages exchanged. Such security measures are crucial for applications that deal with sensitive information or require secure communications, making SOAP a preferred choice in enterprise environments and industries like banking and healthcare. Summary SOAP's flexibility has helped it become a cornerstone of web services and enterprise-level applications where security, reliability, and formal contracts are paramount. Its standardized use of XML ensures that SOAP-based services can operate across different environments and systems, facilitating seamless interoperability and communication. Updated May 22, 2024 by Nils-Henrik G. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/soap Copy
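The sketch below shows the general shape of a SOAP exchange: an XML envelope posted over HTTP using only Python's standard library; the endpoint URL, namespace, and GetPrice operation are hypothetical.

import urllib.request

# Hypothetical envelope; the namespace and operation are for illustration only
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/products">
      <Item>Widget</Item>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "https://example.com/soap-endpoint",          # hypothetical SOAP endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/products/GetPrice"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())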

SoC

SoC Stands for "System On a Chip." An SoC (pronounced "S-O-C") is an integrated circuit that contains all the required circuitry and components of an electronic system on a single chip. It can be contrasted with a traditional computer system, which is composed of many distinct components. A desktop computer, for example, may have a CPU, video card, and sound card that are connected by different buses on the motherboard. An SoC combines these components into a single chip. The primary advantage of a system on a chip is the reduction of physical space required for the system. By merging multiple components together, SoCs can be used to create fully functional systems that are a fraction of the size of their traditional counterparts. Examples include smartphones, tablets, and wearable devices, such as smartwatches. A smartwatch SoC, for example, may include a primary CPU, graphics processor, DAC, ADC, flash memory, and voltage regulator. All of these components fit on a single chip roughly the size of a quarter. The size of an SoC is also its greatest disadvantage. Since all the components are compressed into a single integrated circuit, they are limited in their storage capacity and processing power. For example, a high-end desktop computer with a dedicated graphics card and SSD will easily outperform a smartphone from the same generation. However, advances in mobile processing technology enable modern smartphones to provide similar performance to high-end computers from only a few years ago. Updated November 12, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/soc Copy

Social Engineering

Social Engineering Social engineering, in the context of computer security, refers to tricking people into divulging personal information or other confidential data. It is an umbrella term that includes phishing, pharming, and other types of manipulation. While "social engineering" may sound innocuous (since it is similar to social networking), it refers specifically to malicious acts and is a topic all Internet users should understand. Unlike hacking, social engineering relies more on trickery and psychological manipulation than technical knowledge. For example, a malicious user may send you a "phishing" email that says you need to reset your username and password for a specific website. The email may appear to be legitimate, but if you click the link in the message, it may direct you to a fake website that captures your information. Another common type of social engineering uses false alerts on websites. For example, when you open a webpage, you might receive a message saying your computer has a virus and you need to download a specific program or call a phone number to fix it. In most cases, these alerts are auto-generated and are completely false. If you follow the instructions in the alert message, you may end up downloading spyware or giving away personal information over the phone. Social engineering may also take place through social media. For example, malicious users may post public messages on sites like Facebook and Twitter that lure people into sharing personal information. Common examples include false giveaways and prize alerts. In some cases, social engineers will even build relationships with others using online chat or private messaging before convincing them to divulge confidential data. While most Internet users do not harbor malicious intent, social engineering is an unfortunate reality of the Internet. Therefore, it is wise to be skeptical of any message, email, or website that asks you to share personal data — especially if the request is from an unknown source. You can often verify the legitimacy of a message by checking the domain name of the website or contacting the author of the message. If you cannot verify the origin of a request, do not follow the instructions. By recognizing fake messages on the Internet, you can avoid being a victim of social engineering. Updated August 20, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/social_engineering Copy

Social Media

Social Media Social media refers to online activity where people interact with each other by sharing photos, videos, blog posts, articles, and other user-generated content (UGC). People may share social media content with their friends on a social network like Facebook or with a broader audience of subscribers on a large social media platform like YouTube, Instagram, or TikTok. Each social media platform works differently, but most share some common traits. First, a person creates an account on the platform's website or app. They're encouraged to build a social network of friends and followed profiles on the platform to curate the feed of content they see whenever they visit the site or open the app. "Friending" and "following" are both ways to see someone's social media posts, but with a significant difference. Friending someone is a mutual relationship where both people see the other's posts; following is a one-sided arrangement where the follower sees content from the person they're following but not vice versa. Some platforms, like Twitter, only support follower relationships — in these cases, following each other has the same effect as friending. The social media content feed on most platforms is generated by an algorithm, with some platforms allowing you to switch between an algorithmic feed and a purely chronological one. These feeds combine content from friends and followed accounts with other content users may like based on their interests and social media activity. Users can curate their feed further by liking content that appeals to them; some platforms also allow users to dislike content to see less of certain things. Everyone should be mindful of their privacy settings when sharing on social media. Sharing an image or text post will initially only reach your friends and followers, but if a post receives enough engagement — an industry term for interactions including likes, comments, and people sharing it to their timeline — it may spread to other people by appearing in their algorithmic feeds. Known as "going viral," this can lead to the person posting the content gaining lots of followers, but they may also receive a lot of unwanted attention. Some social media platforms allow a person to limit what they share to only friends or followers to keep some control over the spread of their posts. Updated December 15, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/social_media Copy

Social Networking

Social Networking Social networking is the activity of connecting with other people, often on social media sites, to create new relationships and build existing ones. Websites dedicated to social networking allow people to create profiles, connect with other users, exchange messages, and share information. Social networking activity may happen on large, general-interest social media sites (like Facebook) or on smaller social networking sites with a narrower focus that serve as the foundation for an online community. Social networking sites exist for people in a specific field, with shared interests and hobbies, or in need of support for a particular problem. When someone signs up for a social networking site, they first create a profile. In most cases, they may remain anonymous, but by providing some background information they can make it easier to connect with other people who share their interests or have similar backgrounds. Once someone has created their account and profile, they can view the profiles of other users and connect to them (often using the terms "friend" or "contact"). Other features common to social networks include private messaging between people, sharing content (like images, links, and text-based posts) with their friends, and posting comments on the content shared by others. Relationships on social networking sites can take multiple forms. "Friending" creates a one-to-one mutual relationship, while "following" someone creates a one-sided connection to see their posts without them necessarily following back. Relationships built on social networking sites may remain entirely online, but many people use them to establish offline relationships. For example, LinkedIn is a popular business-focused social networking site where many people make contacts that help them find a new job or advance their careers; dating sites are also considered a form of social networking. NOTE: The terms "social networking" and "social media" are often used synonymously with each other, but they mean different aspects of a larger concept. Social networking refers to making connections between the users of an online community, often fostering one-to-one relationships and discussions. Social media instead refers to user-generated content created and shared online, often aimed at a large group of followers. Sites focused on social media content (rather than social networking relationships) often have a primary focus on algorithms designed to get people to follow accounts they may be interested in, with less emphasis on making friends with other users. Updated December 15, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/social_networking Copy

Socket

Socket A socket is a software component that helps computer programs communicate over a network. The term "socket" evokes a physical socket that you plug a networking cable into and through which all data travels. Applications establish software networking sockets for a similar purpose — all data an application sends out and receives over a network connection passes through its socket. Sockets provide a standard method for a program to establish network connections. Software developers can use an API to create a socket managed by the operating system, so that they don't need to implement network communication from scratch. When the application needs to send data out over the network, it writes that data to the socket as if it were writing a file to a disk. The operating system detects this and automatically forwards the data packet out over the network and to its destination. A socket address combines an IP address and a Port number Each socket has a unique socket address that combines an IP address and a port number. This address identifies both the specific computer the socket is running on and the application that created it. Applications running on a server create sockets when they launch and maintain them in a listening state, while applications on clients can open and close sockets as needed. While a socket is open, the operating system monitors it for incoming and outgoing data packets and looks at each packet's socket address to know where to forward it. For example, when you visit a website with your web browser, it creates a socket that connects to another one on a remote web server using its IP address and port number (80 for HTTP connections and 443 for HTTPS connections). The browser sends a request for a specific webpage to this socket, which the operating system detects and forwards to the web server. When the web server sends the requested page back to your computer, the operating system looks at the socket address and knows to send it back to the web browser. Updated November 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/socket Copy
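The sketch below mirrors that browser example in miniature: a Python client opens a TCP socket to a web server's port 80 and sends a bare HTTP request; example.com stands in for any reachable host.

import socket

# Create a TCP socket and connect it to a remote socket address (host, port)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect(("example.com", 80))

    # Data written to the socket is forwarded over the network by the OS
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    # Data arriving for this socket address is delivered back to the program
    reply = sock.recv(4096)
    print(reply.decode(errors="replace").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"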

Soft Copy

Soft Copy A soft copy is a digital version of a document, image, or other data. It can be displayed and read on a computer, tablet, or mobile device screen. Soft copies can be opened and edited by software programs. Soft copies are often compared to hard copies that are printed on paper. Unlike a hard copy, which is permanent, a soft copy is digital and can be edited at any time. It can also be deleted by accident, which makes having a safe backup (as well as a printed hard copy) a good idea in many cases. Soft copies of a document can be easily transmitted by email, over a network or the Internet, or on removable media like flash drives and external hard drives. Many businesses transmit a soft copy of a document to employees and other organizations whenever possible, saving both the time and expense of printing enough copies for everyone and sending them by mail or courier. There are some instances where a hard copy is required, particularly with legal and tax documents, but many people find it far more convenient to use a soft copy of a document whenever possible. Updated November 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/softcopy Copy

Soft Token

Soft Token A soft token, or software token, is a piece of software that authenticates a user as part of a multi-factor authentication system. The most common form of soft token is an authenticator app, which can generate one-time-use passwords that add a level of security beyond a regular username and password. They are similar to hard tokens but exist as software on a device instead of a piece of dedicated hardware. Soft tokens allow you to use a smartphone (or other device) as a part of a multi-factor authentication process. First, the authentication server generates a secret key for you to enter into your authenticator app. Once the secret key is in place, the authenticator app can generate one-time-use passwords, which are typically valid for 30 to 60 seconds each. The authenticator app generates passwords using the same algorithm as the authentication server; as long as the clocks on the server and the authenticator app are in sync, the passwords will match and authenticate the user. While both hard and soft tokens add an extra layer of security to the authentication process, soft tokens do so with a little more convenience to the user. Hard tokens are small devices you need to carry with you and are useless if you forget them, but soft tokens live on the smartphone you're already likely carrying. An authenticator app may also utilize the smartphone's biometric security features like fingerprint scans or facial recognition, adding another level of protection. Finally, if a smartphone with your soft token is lost or stolen, your administrator can revoke the existing key and replace it with a new one that you can add to a new device instead of waiting for a replacement hard token. Updated June 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/soft_token Copy
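The one-time passwords themselves typically come from the standard TOTP algorithm (RFC 6238). The sketch below shows the core idea in Python using a placeholder shared secret; real authenticator apps add secure key storage, provisioning, and clock-drift handling on top of this.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # time step shared with the server
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a placeholder base32 secret for demonstration only
print(totp("JBSWY3DPEHPK3PXP"))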

Software

Software Software is a catch-all term for the programs and applications that run on a computer. Software programs act as the instructions that tell a computer's physical components, known as its hardware, what to do. It is made by software developers, who write the underlying source code in one of many programming languages before compiling software into executable files that a computer can understand. Software is virtual, with no tangible physical element. It exists solely as digital data on a computer's storage disk and is loaded into system memory when executed. Software typically accepts some form of input from the user, performs some function or process based on that input, then returns output for the user's benefit. Since it is digital data, software is easier to modify or upgrade than hardware. There are two main categories of software: system software and application software. System software directly controls the hardware, consisting of the computer's operating system, BIOS, and device drivers. It provides a platform on which application software can run. Application software provides extra utility beyond the computer's basic operations and helps a computer's user accomplish their tasks. Word processors, image editors, media players, games, web browsers, and email clients are examples of application software. Word processors and games are two types of software You can install new software to customize your computer's capabilities to meet your needs. Installing software puts all of a program's files and resources in the right place and tells the operating system that it is available to open certain file types. Historically, you would install software from physical media, like floppy disks, CD-ROMs, or DVD-ROMs. However, most software is now distributed online through App Stores and other software downloads. Software is made available to users through some form of use license. Commercial software requires payment in exchange for the software, either through a one-time purchase or a subscription. Free software is available without payment, supporting the developer through voluntary donations or in-app advertisements. Open source software is also typically available for free along with its source code, allowing anyone to contribute by improving it — adding new features, squishing bugs, or improving performance. Updated June 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/software Copy

Solid State

Solid State In the context of electronic devices, "solid state" refers to a component that has no moving parts. Solid-state components use solid materials like silicon to carry electronic signals, as opposed to the vacuum tubes and other mechanical components used by early electronics. The term is often used specifically to refer to solid-state flash storage drives (SSDs), in contrast to mechanical hard disk drives. Many early computer storage devices were not solid-state devices, as they relied on moving parts to store data. For example, a hard disk drive stores data on spinning magnetic platters that are read by mechanical read/write heads; similarly, an optical drive reads data from a spinning disc using a movable laser. If a mechanical drive experiences a sudden movement while reading or writing data, it could skip and cause a read error or even permanent damage to the read/write heads and data platters. Solid-state drives use flash memory and have no moving parts that could be affected by sudden movements. This makes them far more durable and ideal for portable devices. All modern electronics include solid-state components like transistors, diodes, and integrated circuits. Many computer components are entirely solid-state without any moving mechanical parts at all, including processors, memory chips, motherboards, and expansion cards. Smartphones, tablets, and laptop computers typically only include solid-state components — aside from accessory components like cooling fans, display hinges, keyboard keys, and power and volume buttons. Updated October 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/solidstate Copy

Sonoma

Sonoma macOS Sonoma (macOS 14) is the 20th version of the macOS operating system. It follows macOS Ventura (macOS 13) and was released to the public on September 26, 2023. macOS 14 Sonoma adds several new features to the operating system and several bundled applications. It maintains a similar look and feel to previous macOS versions with a few tweaks to bring it closer to iOS. It supports the same interactive widgets as iOS, and you can now place them on the desktop instead of hiding them in the notification center. You can add widgets for any app installed on your iPhone or iPad and do not need to have the app installed on your Mac. Sonoma also includes a redesigned lock screen that closely resembles the one in iOS and iPadOS. The Safari web browser also receives several noticeable new features. You can now create browsing profiles with separate bookmarks, cookies, and browsing history, keeping your personal web browsing separate from your work or school browsing. You can pin web apps directly to the dock, which will open in a simplified window instead of a full browser window. You can also securely share saved passwords and passkeys with family members, and when a password is updated, the changes will sync automatically. macOS Sonoma desktop with widgets and multiple Safari profiles Other new features to the operating system and bundled apps include: Improved autocorrect and predictive text system-wide, using a new machine learning language model. A new Presenter overlay in the FaceTime app that shows your face in the corner when sharing your screen. New search filters in the Messages app that make it easier to find text within specific conversations, as well as a Catch-Up feature to quickly jump to the first unread message in a long thread. Facial recognition in the Photos app has been expanded to recognize cats and dogs, and will automatically generate photo albums of your pets. A new Game Mode that optimizes CPU and GPU performance for 3D graphics and reduces input latency while playing games. Like previous versions of macOS, Sonoma drops support for several older models of Intel Macs. It supports the iMac Pro released in 2017, MacBook Pro, MacBook Air, and Mac Mini models released in 2018 or later, iMac and Mac Pro models released in 2019 or later, and all models of the Mac Studio. Updated October 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sonoma Copy

Sound Card

Sound Card A sound card is a hardware component in a computer that provides an interface for audio input and output. A typical sound card includes at least two 3.5mm audio jacks — one for analog stereo output and one for line-in or microphone input. It uses a digital-to-analog converter (DAC) to convert digital audio into analog signals for playback through speakers and headphones; it also includes an analog-to-digital converter (ADC) to digitize analog input from microphones. It may also include an interface for digital audio output, typically using a Toslink optical connector. Many computer motherboards integrate a sound card directly, providing a set of 3.5mm audio jacks alongside the rest of the ports on the I/O panel. Dedicated cards that plug into a computer's PCIe expansion slot can add more input and output connections, improve audio processing performance, and improve audio quality when paired with high-end speakers. External sound cards that connect to a computer over USB, also known as audio interfaces, can include larger ports than would fit on an internal card's backplate, like ¼-inch audio inputs and XLR jacks for musical instruments and high-end microphones. A Creative SoundBlaster sound card with multiple audio input and output jacks, including a digital Toslink connector For most computer users, a motherboard's integrated sound card is enough to use with stereo speakers, headphones, or a headset with a microphone. However, there are several reasons that you may want a dedicated sound card. They often include a higher-quality DAC, which can produce cleaner sound with less noise. They may also provide better amplification to improve sound quality when used with high-impedance headphones. Dedicated sound cards often support higher resolutions and sampling rates (up to 32-bit / 384 KHz), and may even include hardware encoding and decoding for surround sound formats like Dolby Digital and DTS. NOTE: High-end sound cards often include breakout boxes. These devices connect to the sound card over a digital interface and provide more input and output jacks, a volume control knob, and a headphone amplifier. Updated October 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/soundcard Copy

Source Code

Source Code Source code is text written in a programming language that contains a program's instructions. Source code files are human-readable plain text files that include variable declarations, functions, loops, and other statements that tell a program what to do. Some programs only need a few lines of source code, while others may consist of millions of lines. Software developers usually follow several common conventions when writing source code to make complex code easier to read and modify. For example, developers should name variables in a way that describes their purpose, and also use a consistent method for indenting code blocks that clearly separates functions. Developers also typically include comments to explain sections of code. Comments help other programmers (and even the original programmer in the future) understand what each code element does. Short programs in some programming languages are known as scripts; these programs may be run directly from the source code using a scripting engine, like a VBScript or PHP engine. However, most programming languages require that a developer compile a program's source code before it can run. Compiling translates source code into binary machine code, stored as executable files that the computer can understand and run. A program's source code may be kept in a single file or distributed across many interlinked files. Breaking source code into several files helps keep a program's functions organized by their purpose. Many programmers use an Integrated Development Environment (IDE) to write and manage their source code files. An IDE typically includes a code editor, a version control system, a runtime environment, a debugger, and a compiler. These tools make it easier for a developer to write code, test it, find and fix bugs, and compile it for distribution within a single program. Updated March 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/source_code Copy
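As a simple illustration of these conventions, the short hypothetical Python snippet below uses descriptive names, consistent indentation, and comments; it is not tied to any particular program.

# Calculate the average of a list of test scores.
def average_score(scores):
    total = sum(scores)           # add up every score
    return total / len(scores)    # divide by the number of scores

midterm_scores = [88, 92, 75, 97]
print(average_score(midterm_scores))  # 88.0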

Southbridge

Southbridge The southbridge is a chip that connects the northbridge to other components inside the computer, including hard drives, network connections, USB and Firewire devices, the system clock, and standard PCI cards. The southbridge sends and receives data from the CPU through the northbridge chip, which is connected directly to the computer's processor. Since the southbridge is not connected directly to the CPU, it does not have to run as fast as the northbridge chip. However, it processes data from more components, so it must be able to multitask well. On Intel systems, the southbridge is also referred to as the I/O Controller Hub, since it controls the input and output devices. Updated September 9, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/southbridge Copy

Spam

Spam Spam is a term for any unsolicited bulk message. Spam is usually junk email, but it can also take the form of text messages, phone calls, or social media messages. Many email services include an automatic spam filter that detects spam and sends it to a junk folder instead of the inbox. The goal of spam is ultimately to get someone to open a message and entice them to buy a product or service, which may or may not be a fraud. Some spammers instead try to trick users into infecting their computers with a virus to steal valuable information or extort money from the victim. Spammers also send bulk emails to conduct phishing scams. Spam filters attempt to identify and block spam by analyzing a message's content and metadata. The most basic filters will look for certain trigger words in an email. A link to a website used in previous spam emails can also cause a message to get marked as spam. Frequent spammers will also be blacklisted by their IP addresses to prevent them from reaching inboxes. History of the Term The term comes from the Hormel canned meat product and a 1970 sketch on the Monty Python television show. In the sketch, the word is said more than 130 times in 3½ minutes, which inspired some early Internet users to repeatedly post the word "spam" to newsgroup message boards and drown out any other discussion. This unproductive exercise was termed "spamming." Once unsolicited bulk commercial messages started being posted to newsgroups and arriving in email inboxes, Internet users applied the term to that practice as well. Updated October 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/spam Copy
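The trigger-word filtering mentioned above can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical example; real spam filters combine many signals (links, sender reputation, message metadata) with statistical models.

# Hypothetical trigger phrases; a real filter would use a much larger, weighted list.
TRIGGER_PHRASES = {"free money", "act now", "winner"}

def looks_like_spam(message_text):
    text = message_text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

print(looks_like_spam("Congratulations, winner! Claim your free money now."))  # True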

Speakers

Speakers Speakers are one of the most common output devices used with computer systems. Some speakers are designed to work specifically with computers, while others can be hooked up to any type of sound system. Regardless of their design, the purpose of speakers is to produce audio output that can be heard by the listener. Speakers are transducers that convert electrical audio signals into sound waves. The speakers receive audio input from a device such as a computer or an audio receiver. This input may be either in analog or digital form. Analog speakers simply convert the analog signal directly into sound waves. Since sound waves are produced in analog form, digital speakers must first convert the digital input to an analog signal, then generate the sound waves. The sound produced by speakers is defined by frequency and amplitude. The frequency determines how high or low the pitch of the sound is. For example, a soprano singer's voice produces high frequency sound waves, while a bass guitar or kick drum generates sounds in the low frequency range. A speaker system's ability to accurately reproduce sound frequencies is a good indicator of how clear the audio will be. Many speakers include multiple speaker cones for different frequency ranges, which helps produce more accurate sounds for each range. Two-way speakers typically have a tweeter and a mid-range speaker, while three-way speakers have a tweeter, mid-range speaker, and subwoofer. Amplitude, or loudness, is determined by the change in air pressure created by the speakers' sound waves. Therefore, when you crank up your speakers, you are actually increasing the air pressure of the sound waves they produce. Since the signal produced by some audio sources is not very strong (like a computer's sound card), it may need to be amplified by the speakers. Therefore, most external computer speakers are amplified, meaning they use electricity to amplify the signal. Speakers that can amplify the sound input are often called active speakers. You can usually tell if a speaker is active if it has a volume control or can be plugged into an electrical outlet. Speakers that don't have any internal amplification are called passive speakers. Since these speakers don't amplify the audio signal, they require a high level of audio input, which may be produced by an audio amplifier. Speakers typically come in pairs, which allows them to produce stereo sound. This means the left and right speakers transmit audio on two completely separate channels. By using two speakers, music sounds much more natural since our ears are used to hearing sounds from the left and right at the same time. Surround systems may include four to seven speakers (plus a subwoofer), which creates an even more realistic experience. Updated February 27, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/speakers Copy

Speech Recognition

Speech Recognition Speech recognition is the capability of an electronic device to understand spoken words. A microphone records a person's voice and the hardware converts the signal from analog sound waves to digital audio. The audio data is then processed by software, which interprets the sound as individual words. A common type of speech recognition is "speech-to-text" or "dictation" software, such as Dragon Naturally Speaking, which outputs text as you speak. While you can buy speech recognition programs, modern versions of the Macintosh and Windows operating systems include a built-in dictation feature. This capability allows you to record text as well as perform basic system commands. In Windows, some programs support speech recognition automatically while others do not. You can enable speech recognition for all applications by selecting All Programs → Accessories → Ease of Access → Windows Speech Recognition and clicking "Enable dictation everywhere." In OS X, you can enable dictation in the "Dictation & Speech" system preference pane. Simply check the "On" button next to Dictation to turn on the speech-to-text capability. To start dictating in a supported program, select Edit → Start Dictation. You can also view and edit spoken commands in OS X by opening the "Accessibility" system preference pane and selecting "Speakable Items." Another type of speech recognition is interactive speech, which is common on mobile devices, such as smartphones and tablets. Both iOS and Android devices allow you to speak to your phone and receive a verbal response. The iOS version is called "Siri," and serves as a personal assistant. You can ask Siri to save a reminder on your phone, tell you the weather forecast, give you directions, or answer many other questions. This type of speech recognition is considered a natural user interface (or NUI), since it responds naturally to your spoken input. While many speech recognition systems only support English, some speech recognition software supports multiple languages. This requires a unique dictionary for each language and extra algorithms to understand and process different accents. Some dictation systems, such as Dragon Naturally Speaking, can be trained to understand your voice and will adapt over time to understand you more accurately. Updated January 10, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/speech_recognition Copy

Spellcheck

Spellcheck Spellcheck is a feature included with various operating systems and applications that checks text for spelling errors. Some spellcheckers must be run manually, while others check each word as you type. The first spellcheckers were included with word processors, such as Microsoft Word and Corel WordPerfect. They worked by checking each word in a text document against a list of words in the program's "dictionary." If no match was found, the spellchecker would stop and display an alert saying the word was misspelled. Modern spellcheckers work in a similar way, but include several advancements. For example, most spellcheckers now allow you to customize the dictionary by adding words so they won't be flagged as misspellings in the future. Microsoft Word's spellchecker checks grammar as well as spelling. For instance, it may tell you to use a singular pronoun rather than a plural one or suggest that you use "which" instead of "that" after a comma in a sentence. Most spellcheckers now perform spellchecks in real-time, underlining misspelled words (typically in red) as you type. OS X will often correct common spelling mistakes automatically (for example, changing "teh" to "the"). You can also right-click a misspelled word in OS X and select the correct spelling from a list of options. If the word is spelled correctly, you can choose "Learn Spelling" to add the word to the OS X dictionary. Mobile devices, such as smartphones and tablets take spellchecking to a whole new level with autocorrect and autocomplete. For example, iOS and Android automatically correct misspellings by guessing what letters you meant to press on the keyboard. Microsoft Windows Phone provides spelling suggestions as you type, which allows you to complete words by simply selecting them from a list. NOTE: Spellcheck may be used as both a noun (performing a spellcheck) and a verb (spellchecking a document). Ironically, there is no official spelling of the term, as "spell-check" and "spell check" are also acceptable. However, "spellcheck" is most common. Updated January 8, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spellcheck Copy
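The dictionary-lookup approach described above can be illustrated with a short Python sketch. The word list here is a tiny hypothetical sample; real spellcheckers use much larger dictionaries along with grammar rules and suggestion algorithms.

# Flag any word that does not appear in the dictionary word list.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}   # hypothetical sample

def find_misspellings(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w and w not in DICTIONARY]

print(find_misspellings("Teh quick brown fox jumps"))  # ['teh']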

SPF

SPF Stands for "Sender Policy Framework." SPF is a email authentication system designed to prevent email spoofing. It works by verifying that an email message is sent from an authorized IP address. SPF is commonly used alongside DKIM, another email verification technology, though they are not dependent on each other. In order for SPF verification to take place, the sender policy framework must be configured on the outgoing mail server. This involves turning on SPF and creating SPF record. The SPF record includes one or more IP addresses that are authorized to send mail for a specific domain name. A website admin tool like cPanel will automatically generate an SPF record when the service is enabled in the Email → Authentication control panel. Records can also be created manually. Below is an example of a valid SPF record with two IP addresses. v=spf1 +a +mx +ip4:12.34.56.78 +ip4:12.34.56.79 ~all The v variable at the beginning of the string is the version. a means "pass" if the IP address has an A record in the domain's zone file. mx means "pass" if the IP address is one of the MX hosts listed in the DNS The ip4 means "pass" if the IP address matches the corresponding IPv4 address. Finally, ~all means "soft fail" if the information cannot be verified. The possible results of an SPF check are: Pass Fail SoftFail Neutral None TempError PermError Generally, the only type of error that will cause a message to be rejected is a "Fail" response. PermError, TempError, and SoftFail may also cause a message to be rejected, depending on the receiving mail server's settings. In most cases, a message with a SoftFail response will still be delivered, but it may have a higher spam score than a message that passes the check. This might cause a mail client to label the message as junk. Emails that pass SPF verification are less likely to be marked as spam, increasing the deliverability of legitimate messages. NOTE: Like DKIM, you can typically see the results of the SPF check by viewing the headers in an email message. Updated January 7, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spf Copy

Spider

Spider A spider is a software program that travels the Web (hence the name "spider"), locating and indexing websites for search engines. All the major search engines, such as Google and Yahoo!, use spiders to build and update their indexes. These programs constantly browse the Web, traveling from one hyperlink to another. For example, when a spider visits a website's home page, there may be 30 links on the page. The spider will follow each of the links, adding all the pages it finds to the search engine's index. Of course, the new pages that the spider finds may also have links, which the spider continues to follow. Some of these links may point to pages within the same website (internal links), while others may lead to different sites (external links). The external links will cause the spider to jump to new sites, indexing even more pages. Because of the interwoven nature of website links, spiders often return to websites that have already been indexed. This allows search engines to keep track of how many external pages link to each page. Usually, the more incoming links a page has, the higher it will be ranked in search engine results. Spiders not only find new pages and keep track of links, they also track changes to each page, helping search engine indexes stay up to date. Spiders are also called robots and crawlers, which may be preferable for those who are not fond of arachnids. The word "spider" can also be used as a verb, such as "That search engine finally spidered my website last week." Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spider Copy
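The link-following behavior described above can be sketched in Python using only the standard library. This hypothetical example fetches a page, extracts its links, and adds them to a queue of pages to visit; production spiders also respect robots.txt, throttle their requests, and store the index persistently.

# Minimal crawler sketch: fetch pages, record their content, and follow links.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    queue, seen, index = [start_url], set(), {}
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        index[url] = html                      # "index" the page content
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)      # resolve internal and external links
            if absolute.startswith("http"):
                queue.append(absolute)
    return index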

Spoofing

Spoofing The word "spoof" means to hoax, trick, or deceive. Therefore, in the IT world, spoofing refers tricking or deceiving computer systems or other computer users. This is typically done by hiding one's identity or faking the identity of another user on the Internet. Spoofing can take place on the Internet in several different ways. One common method is through e-mail. E-mail spoofing involves sending messages from a bogus e-mail address or faking the e-mail address of another user. Fortunately, most e-mail servers have security features that prevent unauthorized users from sending messages. However, spammers often send spam messages from their own SMTP, which allows them to use fake e-mail addresses. Therefore, it is possible to receive e-mail from an address that is not the actual address of the person sending the message. Another way spoofing takes place on the Internet is via IP spoofing. This involves masking the IP address of a certain computer system. By hiding or faking a computer's IP address, it is difficult for other systems to determine where the computer is transmitting data from. Because IP spoofing makes it difficult to track the source of a transmission, it is often used in denial-of-service attacks that overload a server. This may cause the server to either crash or become unresponsive to legitimate requests. Fortunately, software security systems have been developed that can identify denial-of-service attacks and block their transmissions. Finally, spoofing can be done by simply faking an identity, such as an online username. For example, when posting on an Web discussion board, a user may pretend he is the representative for a certain company, when he actually has no association with the organization. In online chat rooms, users may fake their age, gender, and location. While the Internet is a great place to communicate with others, it can also be an easy place to fake an identity. Therefore, always make sure you know who you are communicating with before giving out private information. Updated November 13, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spoofing Copy

Spool

Spool A spool is a temporary storage area within the computer's RAM that contains input or output data. When a job or process is initiated on a computer, but cannot be run immediately, it is often placed in a spool. This process is called spooling. The spool holds the data until the appropriate device is ready to use it. The most common type of spool is a print spool, which stores print jobs that are sent to a printer. Since the printer may need to warm up before printing a document, the document may be held in the spool during the printer's warm up process. If there are multiple documents that have been sent to the printer, the print spool may contain a queue of jobs. You can think of a print spool as a spool of yarn that is connected from the computer to the printer. Of course, the print spool contains computer data, rather than yarn. As the data is transmitted to the printer, it is removed from the computer, just like unraveling a spool. Once the spool is empty, no more data is transmitted to the printer. Updated December 16, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spool Copy

Spooling

Spooling Spooling is the process of sending data to a spool, or temporary storage area in the computer's memory. This data may contain files or processes. Like a spool of thread, the data can build up within the spool as multiple files or jobs are sent to it. However, unlike a spool of thread, the first jobs sent to the spool are the first ones to be processed (FIFO, not LIFO). The most common type of spooling is print spooling, where print jobs are sent to a print spool before being transmitted to the printer. For example, when you print a document from within an application, the document data is spooled to a temporary storage area while the printer warms up. As soon as the printer is ready to print the document, the data is sent from the spool to the printer and the document is printed. Print spooling gets its name from technology used in the 1960s when print jobs were stored on large reels of magnetic tape. The data from these reels was physically spooled to electrostatic printers, which printed the output saved to the tape. Updated December 16, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spooling Copy

Spreadsheet

Spreadsheet A spreadsheet is a document that stores data in a table of rows and columns. A spreadsheet assigns each column a letter (A-Z at first, then AA, AB, AC, etc.) and each row a number (1, 2, 3, etc.). It identifies each cell by its column and row (A1, B4, D8, etc.). A cell can store a unique value consisting of text and numbers, or even a formula that refers to the contents of other cells. It is common to use the first row and/or column of a spreadsheet for labels describing the contents of the rest of the column or row. For example, a spreadsheet tracking customer information may keep each customer as a single row, with each column recording a piece of data about that customer, like their name, address, and phone number. When used this way, a spreadsheet is similar to a database. A spreadsheet with cell G3 selected, containing a formula multiplying the contents of E3 and F3 Spreadsheet cells may also contain formulas that reference the contents of other cells and perform calculations. Formulas may reference individual cells (C3) or cell ranges (B2:D48). Calculations also update automatically whenever a referenced value changes. For example, if a function calculates the average value of cells in a range, it will automatically update whenever any of those values change. Standard functions range from basic calculations like sums and averages to complex formulas like interest rate calculations. Because of their ability to quickly and easily process data, spreadsheets are often used for scientific and financial data analysis. They are also helpful for visualizing data by creating charts. A spreadsheet file can contain multiple worksheets, which are separate tables containing separate batches of data. Formulas from one worksheet can reference cells on another worksheet, which helps keep data organized — for example, a spreadsheet that tracks sales data can keep each region on a separate worksheet while still calculating their quarterly results together. Updated April 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/spreadsheet Copy

Sprite

Sprite A sprite is a bitmap graphic that is designed to be part of a larger scene. It can either be a static image or an animated graphic. Examples of sprites include objects in 2D video games, icons that are part of an application user interface, and small images published on websites. In the 1980s and for most of the 1990s, sprites were the standard way to integrate graphics into video games. Graphic artists created small 2D images that were used to represent characters and other objects. Developers referenced these sprites in the source code and assigned properties such as when the sprites were displayed and how they interacted with other sprites. For example, in a side-scroller, such as Super Mario Bros, the sprite of an enemy Koopa would turn into a flattened Koopa when Super Mario jumped on it. Today, some video games still use 2D sprites, but most mainstream games use 3D polygons instead. Since computers and gaming consoles now have dedicated 3D video cards, they can actually render 3D objects more efficiently than 2D sprites. While sprites have become less common in modern video games, they are still used by software developers for other purposes. For example, sprites are often used to add buttons, symbols, and other user interface elements to software programs. Developers can attach actions to sprites within the user interface, such as playing an animation or changing the current view of the window when the sprite is clicked. Sprites are especially useful for adding custom graphics that are not natively supported by the operating system's API. Sprites are also used on the Web for navigation buttons and for adding visual appeal to webpages. In recent years, sprite sheets have become a popular way for web developers to load website graphics. By combining a large number of sprites into a single image, all the sprites can be downloaded and cached by a user's browser with a single request to the server. The images are then displayed using CSS properties that define the locations of individual sprites within the image. Updated February 10, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sprite Copy

SPX

SPX Stands for "Sequenced Packet Exchange." SPX is networking protocol primarily used by Novell Netware, but is also supported by other operating systems. It is now considered a legacy protocol since it has largely been replaced by TCP/IP. SPX is the transport layer of the the IPX/SPX protocol and IPX is the network layer. SPX handles the connections between systems while IPXhandles the actual transfer of data. Therefore, SPX commands are used to establish, confirm, and close connections, while IPX commands are used to send and receive data packets. SPX is similar to TCP and IPX is similar to IP in the TCP/IP protocol. The IPX/SPX (or SPX/IPX) protocol originally handled the transport and network commands for the NetWare Core Protocol (NCP). A system running NCP Server (typically a Linux machine) could receive connections from computers running NCP client software. The NCP protocol provided functions for accessing files and directories, syncing clocks, and messaging between systems. When Novell enabled NCP servers to use the universal TCP/IP protocol, it soon replaced the proprietary NCP protocol suite. NOTE: Micro Focus acquired Novell in November, 2014. Updated March 7, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/spx Copy

Spyware

Spyware Spyware is a type of malware designed to spy on the victim's computer activity and report it to another computer over the Internet. Spyware can capture information like Web browsing habits, e-mail messages, usernames and passwords. It generally does not harm files or applications since its goal is to remain undetected as long as possible while it gathers and transmits data. Hackers that deploy spyware are often interested in information they can use directly, or that is profitable to sell to a third party. Some spyware is designed to steal information that could be used for identity theft or other financial fraud — bank account login information, personal data, and cryptocurrency wallets. Other types of spyware merely monitor computer activity to inject advertisements into webpages. Some hackers deploy spyware to target a specific person known to the hacker, often to monitor the victim's online activity and physical location in cases of stalking. There are several methods that spyware can use to monitor a victim's activity. Keyloggers are a common component of spyware, recording every keystroke a user makes to intercept usernames and passwords. Spyware often tracks a user's web activity, much like cookies do. Some spyware can also capture images and video recordings of the user's screen; some can even control a computer's microphone and webcam. While some spyware gets installed through an exploit in a web browser or operating system, most spyware is installed by the victim unknowingly in a Trojan Horse or included with legitimate-looking software. Fortunately, antivirus software can catch and remove most spyware. Updated January 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/spyware Copy

SQL

SQL Stands for "Structured Query Language." SQL is a query language used to access and modify information in a database. Web developers use SQL to connect websites and web applications to databases, creating dynamic webpages that can display up-to-date information each time they load. SQL is a standard language for database queries, used by many different database management systems like Oracle Database and MySQL. Websites and web apps use SQL statements to query relational databases, which store data in tables, rows (representing database entries), and columns (for those entries' attributes). For example, an e-commerce website that manages its inventory in a database would store its products in a table — each product as a row and information about each product, like price, size, and color, as columns. When you view an item's webpage, the web server uses a scripting language like PHP to make one or more SQL queries to retrieve that product's information, then includes the results in the HTML page. If that product's price changes in the database, that new information will appear the next time you load that webpage. An SQL query statement in MySQL Workbench and several rows of results The most common SQL command is SELECT, which retrieves information from a table. It can pull all the information in a table, or only records that meet certain criteria. For example, the following query retrieves all customer records from the specified ZIP code. SELECT * FROM Customers WHERE Zip='55400'; The query below fetches the price of a particular product: SELECT Price FROM Products WHERE ProductID=47988; The UPDATE command changes the information stored in a specified table, row, and column. INSERT INTO creates new records, while DELETE FROM removes a record. NOTE: If a website's developer does not properly validate user input and control access to their databases, it may be vulnerable to an SQL injection attack. An SQL injection allows an attacker to send malicious SQL statements to a database that would normally not be allowed — for example, retrieving a full list of customer names, addresses, and credit card numbers. Updated March 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sql Copy

SQL Injection

SQL Injection An SQL injection is a form of cyberattack that targets database-linked websites and web applications. These attacks exploit poorly-configured website and database server software to trick a server into parsing SQL statements it otherwise would not carry out. These attacks may reveal sensitive personal and financial information, allow an attacker to change or delete information stored in a database, or provide other unauthorized access. A website is vulnerable to an SQL injection when it accepts user input from a text form and includes it as part of an SQL query. Instead of filling out the form field with expected information (like a username), the attacker enters a text string that modifies the SQL query. For example, a hypothetical query might start with SELECT UserID, Name, Password FROM Users WHERE UserID = , followed by the input from a User ID form field. If the user enters their User ID, the query works as expected and pulls up their information. An attacker injects unexpected text into a query statement to change it However, if the server inserts input text into the query unrestricted, an attacker can use it to modify the SQL query statement itself. If instead of a User ID, someone enters the text 317 OR 1=1, the query becomes SELECT UserID, Name, Password FROM Users WHERE UserID = 317 OR 1=1;. Since OR 1=1 always resolves true, it overrides the User ID; the query instead returns the User IDs, names, and passwords of all entries in the Users table. Other types of code injection may allow an attacker to log in as a site administrator, delete records or tables from a database, or (in some extreme cases) inject and run shell scripts and other code. A website administrator can prevent SQL injection attacks by keeping the server software up to date. It is also important to "sanitize" database inputs by restricting input text or by filtering out certain characters. Website administrators may also parameterize their SQL queries — using placeholders in the query that refer to the user input instead of inserting it directly into the query. Updated March 20, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sql_injection Copy
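The difference between inserting user input directly into a query and parameterizing it can be demonstrated with a short Python sketch using the built-in sqlite3 module; the table and input values are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserID INTEGER, Name TEXT, Password TEXT)")
conn.execute("INSERT INTO Users VALUES (317, 'Alice', 'hunter2')")
user_input = "317 OR 1=1"

# Unsafe: the input text becomes part of the SQL statement itself.
unsafe = "SELECT * FROM Users WHERE UserID = " + user_input
print(conn.execute(unsafe).fetchall())               # returns every row in the table

# Safe: the placeholder treats the input as a value, not as SQL.
safe = "SELECT * FROM Users WHERE UserID = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows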

SRAM

SRAM Stands for "Static Random Access Memory." I know it is tempting to pronounce this term as "Sram," but it is correctly pronounced "S-ram." SRAM is a type of RAM that stores data using a static method, in which the data remains constant as long as electric power is supplied to the memory chip. This is different than DRAM (dynamic RAM), which stores data dynamically and constantly needs to refresh the data stored in the memory. Because SRAM stores data statically, it is faster and requires less power than DRAM. However, SRAM is more expensive to manufacture than DRAM because it is built using a more complex structure. This complexity also limits the amount of data a single chip can store, meaning SRAM chips cannot hold as much data as DRAM chips. For this reason, DRAM is most often used as the main memory for personal computers. However, SRAM is commonly used in smaller applications, such as CPU cache memory and hard drive buffers. It is also used in other consumer electronics, from large appliances to small children's toys. It is important to not confuse SRAM with SDRAM, which is a type of DRAM. Updated February 9, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sram Copy

SRE

SRE Stands for "Site Reliability Engineering." SRE is a structured approach to software development that originated at Google. The goal of SRE is to create and maintain software applications that are reliable and scalable. SRE is similar to DevOps, but is developer-focused. Instead of requiring two separate teams (development and operations), SRE developers work as a unified team to produce reliable software. The ideal SRE team includes developers with different specialties so that each developer can provide beneficial insight. SRE focuses on stability rather than agility and proactive engineering rather than reactive development. Site reliability engineering is designed to give developers more freedom to create innovative software solutions. By establishing reliable software systems with redundancy and safeguards in place, developers are not limited by traditional operations protocols. For example, in a DevOps team, the operations manager may need to approve each software update before it is published. In SRE, developers may be allowed to release updates as needed. Since SRE is developer-focused, the manager of an SRE team must have development experience, not just operations knowledge. An SRE manager may actively help with software development instead of merely overseeing it. NOTE: SRE is also short for "Site Reliability Engineer," a title given to a member of a site reliability engineering team. Updated November 16, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sre Copy

sRGB

sRGB Stands for "Standard RGB" (RGB stands for "Red Green Blue"). sRGB is a color space that defines a range of colors that can be displayed on screen on in print. It is the most widely used color space and is supported by most operating systems, software programs, monitors, and printers. Every color on a computer screen is comprised of individual red, green, and blue values. The sRGB specification make sure colors are represented the same way across different software programs and devices. For example, if you select the sRGB color space in your image editor and for your printer (which is typically the default setting), the colors produced by the printer will closely match those on the screen. If your color spaces do not match, the printed colors may appear noticeably different than the ones you see in your software program. If you select the "Adobe RGB" color space in Photoshop, then print to an printer that is set to sRGB, the printed colors may appear dull compared to those on the screen. This is because the Adobe RGB color space has a wider range of colors than sRGB. Similarly, an image saved with an Adobe RGB color profile may appear to have less color when viewed on the web, since web browsers use sRGB as the default color space. Monitors and printers have unique tonal qualities, so colors may not appear exactly same on different devices or from screen to print, even when the sRGB color profile is used. Therefore, high-end devices, such as those used in desktop publishing, allow for manual calibration to adjust for slight differences in color. Even without calibration, sRGB provides a level of consistency for displaying colors, which is especially important across different platforms. History The sRGB color space was created by Microsoft and Hewlett-Packard in 1996. It was designed to provide a universal color space for devices and software programs without the need for embedded ICC (International Color Consortium) profiles. The IEC (International Electrotechnical Commission) standardized the sRGB specification in 1999 as "IEC 61966-2-1:1999." Several decades later, sRGB is still the standard color space for 8-bit color. However, modern displays, which can produce 10-bit or 12-bit color, may also support HDR, which provides a much wider range of colors than sRGB. Updated November 14, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/srgb Copy

SRM

SRM Stands for "Storage Resource Management." SRM is the process of managing large amounts of data. It includes efficiently managing existing data, backing up data, and scaling data storage solutions to grow with increasing storage requirements. SRM software monitors and analyzes data storage usage for a specific storage environment, such as a local storage area network (SAN) or a cloud-based storage system. The software allocates data across multiple storage devices, including HDDs, SSDs, so that the storage is distributed efficiently. It can alert a system administrator when storage devices are failing or when additional storage is needed. Many SRM applications also manage data backups. Some back up data on-the-fly, while others perform backups at regular intervals. A storage network managed by an SRM may have a failover system where a backup storage device is activated if the primary storage device fails. Storage resource management also supports scalability. It enables the addition of new storage devices as more storage is needed. SRM software can detect new storage devices and automatically distribute existing and new data across them. This makes it easy to add extra data capacity to storage networks as needed. As data usage continues to grow, large businesses and organizations may find themselves managing several petabytes or even exabytes of data. SRM technology makes this a manageable task by efficiently organizing data, backing it up, and providing a simple means of adding new storage. Updated October 18, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/srm Copy

SSD

SSD Stands for "Solid State Drive." An SSD is a type of mass storage device similar to a hard disk drive (HDD). It supports reading and writing data and maintains stored data in a permanent state even without power. Internal SSDs connect to a computer like a hard drive, using standard IDE or SATA connections. While SSDs serve the same function as hard drives, their internal components are much different. Unlike hard drives, SSDs do not have any moving parts (which is why they are called solid state drives). Instead of storing data on magnetic platters, SSDs store data using flash memory. Since SSDs have no moving parts, they don't have to "spin up" while in a sleep state and they don't need to move a drive head to different parts of the drive to access data. Therefore, SSDs can access data faster than HDDs. SSDs have several other advantages over hard drives as well. For example, the read performance of a hard drive declines when data gets fragmented, or split up into multiple locations on the disk. The read performance of an SSD does not diminish based on where data is stored on the drive. Therefore defragmenting an SSD is not necessary. Since SSDs do not store data magnetically, they are not susceptible to data loss due to strong magnetic fields in close proximity to the drive. Additionally, since SSDs have no moving parts, there is far less chance of a mechanical breakdown. SSDs are also lighter, quieter, and use less power than hard drives. This is why SSDs have become a popular choice for laptop computers. While SSDs have many advantages over HDDs, they also have some drawbacks. Since the SSD technology is much newer than traditional hard drive technology, the price of SSDs is substantially higher. As of early 2011, SSDs cost about 10 times as much per gigabyte as a hard drive. Therefore, most SSD drives sold today have much smaller capacities than comparable hard drives. They also have a limited number or write cycles, which may cause their performance to degrade over time. Fortunately, newer SSDs have improved reliability and should last several years before any reduction in performance is noticeable. As the SSD technology improves and the prices continue to fall, it is likely that solid state drives will begin to replace hard disk drives for most purposes. Updated January 13, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ssd Copy

SSH

SSH Stands for "Secure Shell." SSH is a networking protocol for securely connecting to one computer from another across a network. It allows a user to remotely log in to a computer where they may view data and issue commands. It also supports secure file transfers using SFTP. SSH uses encryption to protect the connection against eavesdropping, even if used over an insecure network connection. Unix, Linux, and macOS have long included both SSH server and client programs. Windows 10 and later versions also include SSH client and server, accessible through the command line. Using an SSH client to connect to another computer gives you the same command-line access that you would have if you were at that computer's keyboard; use Unix commands when connected to a Unix, Linux, or macOS computer, and Windows / DOS commands when connected to a Windows computer. SSH server software also runs on some enterprise-level networking equipment like routers to allow network administrators remote access to their hardware. An SSH connection to a remote computer SSH was first introduced as a secure alternative to the Telnet protocol, which lacked encryption and sent every command as plain text. SSH encrypts all traffic traveling over the protocol to prevent unauthorized people in the middle from accessing and reading the traffic. While the most common use for SSH is for remote terminal sessions, it can also work as a tunnel for other protocols to securely transmit data. Updated June 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ssh Copy

SSID

SSID Stands for "Service Set Identifier." An SSID is a name assigned to a Wi-Fi network. A network's SSID is a case-sensitive string up to 32 characters long, containing letters, numbers, spaces, and other special characters. Wi-Fi access points publicly broadcast their SSIDs to nearby devices to advertise an available connection. If multiple Wi-Fi networks are available in one place, unique SSIDs help someone select the correct network. Wi-Fi routers and access points ship configured with a built-in default SSID, but a network's owner can change it. Default SSIDs often include the manufacturer's name with a random series of numbers and letters, making them technically unique but easy to mix up when several with similar names are nearby. If a network's SSID changes, every other device will see it as a new network and needs to reconnect to it manually. A Wi-Fi network owner can also hide the SSID, which stops the router or access point from broadcasting its SSID. It will still have an SSID, and anyone trying to connect to that network still needs to enter it along with the network's password. Even if a network's SSID is hidden, every data packet sent over that wireless network still includes it to direct it to the right destination. Therefore, anyone actively scanning network traffic can still identify the hidden network's SSID. NOTE: In addition to the network's SSID, Wi-Fi access points also broadcast a BSSID (Basic Service Set Identifier) that uniquely identifies the networking device itself. In most cases, an access point's BSSID is its MAC address. Updated December 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ssid Copy

SSL

SSL Stands for "Secure Socket Layer." SSL is a technology for authenticating and encrypting data transfers over the Internet. It encrypts data transfers between a user's web browser and remote web servers, protecting user data like login credentials and credit card numbers. SSL was first released to the public in 1995 and was replaced in 1999 by a successor technology called TLS — although the successor is often also referred to as SSL. SSL encrypts data traveling over the Internet to protect it from eavesdropping by other devices along the route. The HTTPS protocol uses SSL and TLS to secure regular web traffic, and other protocols for email (SMTP and IMAP), VPN connections, and file transfers (FTPS) also use it to encrypt and protect data. SSL uses authentication certificates (called SSL certificates) to authenticate the identities of the web server and its owner and administrator. SSL encrypts a data connection between a client's web browser and a server with a valid SSL certificate When first introduced, SSL was primarily used by e-commerce websites to protect a customer's credit card number, address, and other personal information during transactions. It was also used to obscure the username and password used during the login process for any site that allowed users to create accounts. Eventually, many websites started using SSL encryption for all data transfers between a client and server to keep browsing activity private thanks to incentives by search engines (who would reward secure sites by placing them higher in search results) and web browsers (which began labeling sites not using SSL as "not secure"). SSL vs TLS SSL technology was replaced in 1999 by the release of TLS 1.0. TLS addressed several shortcomings and vulnerabilities present in SSL. The first few releases of TLS were backward compatible with SSL, allowing older web browsers and website hosts to continue operating using a lower level of security. TLS 1.2, released in 2008, finally removed backward compatibility after nearly a decade of support. TLS improves on the earlier SSL technology in several ways. It streamlined the handshake process to require fewer steps and speed up connections. It also uses more modern encryption algorithms like AES and ChaCha20-Poly1305 instead of the older DES and RC4 algorithms used by SSL. Updated December 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ssl Copy

SSO

SSO Stands for "Single Sign-On." SSO is an authentication method that allows a person to use a single set of login credentials for multiple websites or services. Using SSO minimizes the number of times a user has to log in and the number of passwords they need to remember. SSO is often used alongside multi-factor authentication (MFA) or two-factor authentication (2FA) to increase the level of security used for the single set of credentials. Developers of software applications and web apps can choose to build support for an SSO method into their apps. Google, Apple, and Facebook all provide SSO services for individuals, allowing someone to sign in to a new app or website without creating a separate account for it. Several other companies provide SSO products focused on the more robust needs of large businesses. Separate services operated by a single company often use SSO to allow users to move between those services without signing in again. For example, someone can sign into their Gmail account, then switch over to Google Docs or YouTube without signing in again. Logging in via SSO requires that all the services support a common identity provider. When the user first logs in to one of the services, it passes the sign-on process to the identity provider. After the user signs in with their username and password, the identity provider creates an authentication token establishing the user's credentials and stores that token in the user's web browser or the identity provider's server. When the user tries to access another service that supports the same SSO method, the service checks the token (in the web browser or on the identity provider's server) and, if successful, grants the user access. Updated November 11, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/sso Copy

Stack

Stack In computing, a stack is a data structure used to store a collection of objects. Individual items can be added and stored in a stack using a push operation. Objects can be retrieved using a pop operation, which removes an item from the stack. When an object is added to a stack, it is placed on the top of all previously entered items. When an item is removed, it can either be removed from the top or bottom of the stack. A stack in which items are removed from the top is considered a "LIFO" (Last In, First Out) stack. You can picture a LIFO stack as a deck of cards where you lay individual cards on the deck, then draw cards from the top. In a "FIFO" (First In, First Out) stack, items are removed from the bottom. You can picture a FIFO stack as a row in a vending machine where items are dispensed in the order they were placed in the machine. Stacks have several applications in computer programming. LIFO stacks, for example, can be used to retrieve recently used objects from a cache. FIFO stacks may be used to ensure data is retrieved in the order it was entered, which may be used for processing data in a queue. While stacks are commonly used by software programmers, you will typically not notice them while using a program. This is because the creation of stacks and push and pop operations are performed in the background while an application is running and are not visible to the user. However, if a stack runs out of memory, it will cause a "stack overflow." If not handled correctly by the program, a stack overflow may generate an error message or cause the program to crash. NOTE: The term "stack" may also refer to a protocol stack, which consists of multiple network protocols that work together. Each protocol is categorized into one of seven different layers defined in the OSI model. Updated July 23, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/stack Copy
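A LIFO stack is easy to demonstrate with a Python list, where append acts as the push operation and pop removes the most recently added item; this is just an illustrative sketch.

stack = []
stack.append("first")    # push
stack.append("second")
stack.append("third")

print(stack.pop())   # "third" — the last item pushed comes off first
print(stack.pop())   # "second"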

Standalone

Standalone A standalone device is able to function independently of other hardware. This means it is not integrated into another device. For example, a TiVo box that can record television programs is a standalone device, while a DVR that is integrated into a digital cable box is not standalone. Integrated devices are typically less expensive than multiple standalone products that perform the same functions. However, using standalone hardware typically allows the user greater customization, whether it be a home theater or computer system. Standalone can also refer to a software program that does not require any software other than the operating system to run. This means that most software programs are standalone programs. Software such as plug-ins and expansion packs for video games are not standalone programs since they will not run unless a certain program is already installed. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/standalone Copy

Standby

Standby Standby is a power-saving mode for computers and other electronic devices. When a device enters a standby mode, it uses just enough power to keep the current system state suspended in RAM while cutting power to everything else. When the device receives the correct signal (typically any user input on a keyboard or mouse, or a special Wake-on-LAN command over the network), it can wake from standby mode almost instantly. Standby mode is also known as "sleep" mode. When a computer goes to sleep, it does not fully shut down. Instead, it cuts power to most of its hardware while maintaining it for critical components. First, the computer supplies power to its BIOS or UEFI firmware, which is responsible for power management. It also continues to power the system's RAM, which uses volatile memory that requires electricity to maintain its contents. Standby mode also feeds some power to the computer's USB ports, network adapter, and any physical buttons that listen for user input or other wake-up events. The CPU, GPU, storage disks, and most other components lose power entirely or receive only a slight trickle. The Power and Sleep settings in Windows 10 let you customize how long your computer waits before going to sleep. Most computers enter standby mode automatically after a set period of inactivity. You can customize how long your computer must sit idle before sleeping in your system settings. Laptops automatically enter standby (or a hybrid sleep similar to hibernate mode) whenever you close the lid, helping to save battery power while the computer is not in use. If the computer loses power while in standby mode (or if a laptop's battery drains completely), it loses the system state saved in RAM and must go through a full boot sequence instead. Updated September 28, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/standby Copy
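The Wake-on-LAN signal mentioned above is a small UDP broadcast called a "magic packet": six 0xFF bytes followed by the target machine's MAC address repeated sixteen times. Below is a brief Python sketch that sends one; the MAC address shown is a placeholder, and the sleeping machine's network adapter and firmware must have Wake-on-LAN enabled for the packet to wake it.

import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet to the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16   # 6 x 0xFF, then the MAC repeated 16 times
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# "00:11:22:33:44:55" is a placeholder; substitute the sleeping machine's MAC address.
send_magic_packet("00:11:22:33:44:55")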

Start Menu

Start Menu The Start menu is a feature of the Windows operating system that provides quick access to programs, folders, and system settings. By default, the Start menu is located in the lower-left corner of the Windows desktop. In Windows 95 through Windows XP, the Start menu can be opened by clicking the "Start" button. In newer versions of Windows, such as Windows Vista and Windows 7, the Start menu can be opened by clicking the Windows logo. Some keyboards also have a Windows key that opens the Start menu when pressed. The Start menu contains two primary columns. The left column contains a list of the most commonly used programs, as well as an "All Programs" submenu, which displays all the currently installed applications. The bottom of the left column includes a search box, which can be used to search for programs and files. The right column contains links to common folders, such as the Documents, Pictures, and Music folders. It also includes links to the Control Panel, Default Programs, and other system settings. The bottom of the right column includes a "Shut down" button, which can be used to turn off or restart the computer, put the computer to sleep, or switch users. The Start menu is an important part of the Windows user interface since it provides shortcuts to many commonly accessed items. By familiarizing yourself with all the items in the Start menu, you may be able to use your computer more efficiently. If you are feeling extra creative, you can customize the functionality of the Start menu by right-clicking a blank area within the Start menu and selecting "Properties." The resulting window will allow you to modify the appearance and behavior of items within the Start menu. Updated May 19, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/start_menu Copy

Static Website

Static Website A static website contains Web pages with fixed content. Each page is coded in HTML and displays the same information to every visitor. Static sites are the most basic type of website and are the easiest to create. Unlike dynamic websites, they do not require any Web programming or database design. A static site can be built by simply creating a few HTML pages and publishing them to a Web server. Since static Web pages contain fixed code, the content of each page does not change unless it is manually updated by the webmaster. This works well for small websites, but it can make large sites with hundreds or thousands of pages difficult to maintain. Therefore, larger websites typically use dynamic pages, which can be updated by simply modifying a database record. Static sites that contain a lot of pages are often designed using templates. This makes it possible to update several pages at once, and also helps provide a consistent layout throughout the site. Updated June 12, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/staticwebsite Copy

Status Bar

Status Bar A status bar is a small area at the bottom of a window. It is used by some applications to display helpful information for the user. For example, an open folder window on the desktop may display the number of items in the folder and how many items are selected. Photoshop uses the status bar to display the size of the current image, the zoom percentage, and other information. Web browsers use the status bar to display the Web address of a link when the user moves the cursor over it. It also shows the status of loading pages, and displays error messages. If you don't see the status bar in your Web browser or another program, you may be able to enable it by selecting "Show Status Bar" from the application's View menu. If this option is not available in the View menu, the program may not use a status bar. Some programs use a "status window" instead to show the current activity in the application. The option for displaying this window is usually found in the "Window" menu. Updated May 10, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/statusbar Copy

Steganography

Steganography Steganography is the art of concealing information. In computer science, it refers to hiding data within a message or file. It serves a similar purpose to cryptography, but instead of encrypting data, steganography simply hides it from the user. Invisible ink is an example of steganography that is unrelated to computers. A person can write a message with clear or "invisible" ink that can only be seen when another ink or liquid is applied to the paper. Similarly, in digital steganography, the goal is to hide information from users except those who are meant to see or hear it. Steganography Examples Since steganography is more of an art than a science, there is no limit to the ways steganography can be used. Below are a few examples: Playing an audio track backwards to reveal a secret message Playing a video at a faster frame rate (FPS) to reveal a hidden image Embedding a message in the red, green, or blue channel of an RGB image Hiding information within a file header or metadata Embedding an image or message within a photo through the addition of digital noise Steganography can also be as simple as embedding a secret message in plain text. Consider the following sentence: "This example contains highly Technical expressions regarding modern simulations." The first letter of each word produces the hidden phrase, "TechTerms." Updated February 24, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/steganography Copy
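One of the techniques listed above, embedding a message in an image's red, green, or blue channel, typically works by overwriting the least significant bit of each color value, a change far too small to notice visually. The Python sketch below demonstrates the idea on a plain list of byte values standing in for one color channel rather than a real image file; the helper names are illustrative.

def hide(message, pixels):
    """Store each bit of the message in the least significant bit of a pixel value."""
    bits = []
    for byte in message.encode():
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("not enough pixel values to hold the message")
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, length):
    """Read the message back out of the low bits, given its length in characters."""
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()

cover = list(range(200)) * 2             # stand-in for one color channel of an image
stego = hide("TechTerms", cover)         # the altered values look almost identical
print(reveal(stego, len("TechTerms")))   # prints "TechTerms"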

Storage Capacity

Storage Capacity Storage capacity refers to how much disk space one or more storage devices provides. It measures how much data a computer system may contain. For example, a computer with a 500GB hard drive has a storage capacity of 500 gigabytes. A network server with four 1TB drives has a storage capacity of 4 terabytes. Storage capacity is often used synonymously with "disk space." However, it refers to overall disk space, rather than free disk space. For example, a hard drive with a storage capacity of 500GB may only have 150MB available if the rest of the disk space is already used up. Therefore, when checking your computer to see if it meets a program's system requirements, make sure you have enough free disk space to install the program. If you need more disk space, you can increase your computer's storage capacity by adding another internal or external hard drive. Updated August 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/storagecapacity Copy
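The difference between total capacity and free space is easy to check programmatically. For example, Python's standard shutil.disk_usage function reports the total, used, and free space of the disk that holds a given path:

import shutil

usage = shutil.disk_usage("/")          # use a path such as "C:\\" on Windows
gb = 1024 ** 3

print(f"Storage capacity: {usage.total / gb:.1f} GB")   # overall disk space
print(f"Used:             {usage.used / gb:.1f} GB")
print(f"Free:             {usage.free / gb:.1f} GB")    # space available for new files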

Storage Device

Storage Device A computer storage device is any type of hardware that stores data. The most common type of storage device, which nearly all computers have, is a hard drive. The computer's primary hard drive stores the operating system, applications, and files and folders for users of the computer. While the hard drive is the most ubiquitous of all storage devices, several other types are common as well. Flash memory devices, such as USB keychain drives and iPod nanos, are popular ways to store data in a small, mobile format. Other types of flash memory, such as compact flash and SD cards, are popular ways to store images taken by digital cameras. External hard drives that connect via FireWire and USB are also common. These types of drives are often used for backing up internal hard drives, storing video or photo libraries, or for simply adding extra storage. Finally, tape drives, which use reels of tape to store data, are another type of storage device and are typically used for backing up data. Updated November 16, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/storagedevice Copy

Streaming

Streaming Streaming is the playback of music, video, or other media over the Internet while it progressively downloads. Unlike downloading a file directly, a copy of the streamed file is not retained on the device once the playback stops. Media can be streamed to computers, mobile devices, and dedicated streaming devices that connect to a TV like an Apple TV or an Amazon Fire Stick. Most music and video streaming services offer content on demand, which lets users listen or watch something whenever they choose. Some services will stream content live as it is produced, known as "live streaming." Other services allow users to stream their own content from one device to another, across a local network or the Internet. Streaming services use compression to optimize the size of streaming media to save on their bandwidth usage and ensure that content reaches viewers with minimal buffering. Most even keep multiple copies of a file encoded at different bitrates, delivering the best version for the Internet speed available to the viewer. Common audio codecs include MP3 and AAC, while video often uses H.264, HEVC, or VP9. Video and audio streams are then sent across the Internet using specialized application protocols designed for streaming media, most of which operate over the UDP transport protocol. Updated October 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/streaming Copy

String

String In computer science, a string is a fundamental data type used to represent text, as opposed to numeric data types like integers or floating-point numbers. It contains a sequence or "string" of characters, which may include letters, numbers, symbols, and spaces. In most programming languages, strings are enclosed in quotation marks to differentiate them from other data types, such as numbers or variable names. For instance: name = "Peter"; // This is a string age = "44"; // This is a string ID = 123; // This is a number A computer programmer may compare strings to see if they match. For example, the following comparison checks if phrase1 and phrase2 are equal. if (phrase1 === phrase2) { // Code to execute if the values are equal } The three equal signs in the comparison above check if the strings match AND if they are the same data type. For example, comparing the strings "12345" and "12345" would return TRUE, but comparing the string "12345" with the integer 12345 would return FALSE. Additionally, comparing any two non-identical strings, such as "12345" and "12344", would return FALSE. A string's length is determined by counting its characters. Most programming languages have a "string length" function (such as strlen() in PHP) that returns the length of a given string. In older programming languages (like C), strings are terminated with a marker called a null character (\0) to signify the end of the string in memory. Modern programming languages provide various functions to evaluate and manipulate strings, such as finding substrings, providing close or "fuzzy" string matches, and replacing text. These types of string operations are fundamental to the operation of search engines and generative AI. Updated December 18, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/string Copy

Struct

Struct A struct (short for structure) is a data type available in C-based programming languages, such as C, C++, and C#. It is a user-defined data type that can store multiple related items. A struct variable is similar to a database record since it may contain multiple data types related to a single entity. Below is an example of an article defined as a struct in the C programming language. struct Article {     int    articleID;     char   title[120];     char   date[10];     char   author[60];     char   content[4000]; } The above struct "Article" contains both integer and character array data types. It can be used to store all the information about an article in a single variable. Since structs group the data into a contiguous block of memory, only a single pointer is needed to access all the data of a specific article. Structs are similar to classes used in object-oriented programming languages, such as Objective-C and C#. The primary difference between the two is that struct members are public by default, while class members are private by default. That means a struct's members can be accessed and modified by any code that can reach the struct variable, while a class's private members can only be accessed by the class's own methods. Updated November 21, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/struct Copy

Stylus

Stylus A stylus is an input device that looks and acts like a pen. Instead of drawing with ink, it sends a digital signal to a compatible touchscreen, which interprets the pressure as drawing on the screen. A basic stylus is a thin plastic rod tapered on the end like a pen. Early PDAs such as the Apple Newton and Palm Pilot came with this type of stylus. Many digital signature pads used for credit card purchases also use a plastic stylus. The pressure of the plastic tip alone is enough to draw a digital image on the screen. Modern smartphones and tablets have advanced touchscreens that are designed to be controlled by human touch. In order to prevent unintentional input, they ignore contact from other sources, such as a plastic stylus. Therefore, modern touchscreen devices, like the Samsung Galaxy Note and the Apple iPad Pro, have custom styluses that include built-in sensors and transmitters. The Apple Pencil, for example, connects to the iPad via Bluetooth. The iPad Pro combines information from both the stylus and the touchscreen to accurately record the motion of drawing on the screen. NOTE: Any stylus that contains sensors or connects wirelessly to a device is battery-powered. Therefore, a modern stylus may need to be charged in order to work correctly. Updated September 23, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/stylus Copy

Subdirectory

Subdirectory Computers store data in a series of directories. Each directory, or folder, may contain files or other directories. If a directory is located within another directory, it is called a subdirectory (or subfolder) of that folder. Subdirectories may refer to folders located directly within a folder, as well as folders that are stored in other folders within a folder. For example, the main directory of a file system is the root directory. Therefore, all other folders are subdirectories of the root folder. The contents of your computer's operating system are organized using subdirectories. After all, you can imagine how hard it would be to locate documents on your computer if they were all stored in a single folder. By using subdirectories, users can navigate through folders in a logical hierarchy. For example, in Windows, the C: drive contains a "Documents and Settings" subdirectory. Within this directory are subdirectories for each user of the computer. Within each user's folder are other subdirectories, including "Desktop," "My Documents," and "Start Menu." In Mac OS X, the main hard disk contains a "Users" folder. This directory contains a list of all the users' home folders. Within each user's home folder are several other subdirectories, such as "Documents," "Movies," and "Music." Both Windows and Mac users can create subdirectories to further organize data within these predefined folders. This can be useful when categorizing data on your hard drive. For example, if you store all your digital photos within the "Pictures" folder, you might create subdirectories that are named by date, such as "2009-06-01," "2009-07-04," "2009-08-15," etc. You could also create subdirectories based on the source of each image, such as "Digital Photos," "Screenshots," and "Scanned Images." Regardless of what folder names you choose, creating subdirectories can save you a lot of time when browsing through files in the future. Updated August 24, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/subdirectory Copy
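Creating the kind of nested folders described above takes only a line or two of code. Here is a brief sketch using Python's standard pathlib module; the folder names are just examples:

from pathlib import Path

# Create Pictures/2009-06-01/Screenshots, including any missing parent folders.
photos = Path("Pictures") / "2009-06-01" / "Screenshots"
photos.mkdir(parents=True, exist_ok=True)

# List the subdirectories of the Pictures folder.
for sub in Path("Pictures").iterdir():
    if sub.is_dir():
        print(sub)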

Subnet Mask

Subnet Mask A subnet mask is a number that defines a range of IP addresses available within a network. A single subnet mask limits the number of valid IPs for a specific network. Multiple subnet masks can organize a single network into smaller networks (called subnetworks or subnets). Systems within the same subnet can communicate directly with each other, while systems on different subnets must communicate through a router. A subnet mask hides (or masks) the network part of a system's IP address and leaves only the host part as the machine identifier. It uses the same format as an IPv4 address — four sections of one to three numbers, separated by dots. Each section of the subnet mask can contain a number from 0 to 255, just like an IP address. For example, a typical subnet mask for a Class C IP address is: 255.255.255.0 In the example above, the first three sections are full (255 out of 255), meaning the IP addresses of devices within the subnet mask must be identical in the first three sections. The last section of each computer's IP address can be anything from 0 to 255. If the subnet mask is defined as 255.255.255.0, the IP addresses 10.0.1.99 and 10.0.1.100 are in the same subnet, but 10.0.2.100 is not. A subnet mask of 255.255.255.0 allows for close to 256 unique hosts within the network (since not all 256 IP addresses can be used). If your computer is connected to a network, you can view the network's subnet mask number in the Network control panel (Windows) or System Preferences (macOS). Most home networks use the default subnet mask of 255.255.255.0. However, an office network may be configured with a different subnet mask such as 255.255.255.192, which limits the number of IP addresses to 64. Large networks with several thousand machines may use a subnet mask of 255.255.0.0. This is the default subnet mask used by Class B networks and provides up to 65,536 IP addresses (256 x 256). The largest Class A networks use a subnet mask of 255.0.0.0, allowing for up to 16,777,216 IP addresses (256 x 256 x 256). Updated February 14, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/subnet_mask Copy
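The subnets described above can be verified with Python's standard ipaddress module, which accepts a network written with its subnet mask and reports how many addresses it contains and whether a given host falls inside it:

import ipaddress

# The 10.0.1.x network from the example, written with its subnet mask.
network = ipaddress.ip_network("10.0.1.0/255.255.255.0")

print(network.num_addresses)                            # 256
print(ipaddress.ip_address("10.0.1.99") in network)     # True  (same subnet)
print(ipaddress.ip_address("10.0.1.100") in network)    # True  (same subnet)
print(ipaddress.ip_address("10.0.2.100") in network)    # False (different subnet)

# A 255.255.255.192 mask leaves only 64 addresses per subnet.
office = ipaddress.ip_network("10.0.1.0/255.255.255.192")
print(office.num_addresses)                             # 64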

Subpixel

Subpixel A subpixel is a subdivision of a single pixel on a screen. It generally refers to one of the three RGB elements of a single pixel. Computer monitors, televisions, and smartphone and tablet screens show images on a large grid of pixels. Each of those pixels consists of several subpixels that individually light up in a specific color—red, green, or blue, collectively referred to as RGB (some OLED displays use an additional white subpixel for extra brightness). If all of a pixel's subpixels light up to 100%, the pixel will appear white; if they're all turned off to 0%, it will appear black. By lighting up these subpixels to different levels, a screen can show millions of colors. A simple 4x4 subpixel arrangement on an LCD screen (left), and a PenTile arrangement on an OLED screen (right). Subpixels may be arranged in different patterns, depending on the type of display. An LCD typically arranges its subpixels in a simple grid of three rectangular subpixels that form a square. Each one takes up an equal amount of space. OLED screens, on the other hand, will use other subpixel arrangements. For example, a PenTile arrangement is designed for adjacent pixels to share subpixels, allowing for the same effective resolution with fewer subpixels. NOTE: Computers connected to LCD and OLED monitors can enable subpixel anti-aliasing. This is a method of anti-aliasing text that takes into account the arrangement of the subpixels on the screen to make the text sharper than monochrome anti-aliasing. Updated October 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/subpixel Copy

Subscript

Subscript A subscript is a character or string that is smaller than the preceding text and sits at or below the baseline. Subscripts have many scientific applications, including mathematics, computer science, and chemistry. Below is an example of a math function that uses subscripts to define the Fibonacci sequence: Fₙ = Fₙ₋₁ + Fₙ₋₂ The small "n" is a subscript. When used in the context "Fₙ," it refers to a function evaluated for the value "n." The text n-1 and n-2 are also subscripts that define previous values of "n" in the sequence. In computer science, subscripts can be used to specify a number system. For example, the decimal or "denary" number 200 is equal to 11001000 in binary and C8 in hexadecimal. Instead of writing out the full words, you could describe the equality of the three numbers with the following statement: 200₁₀ = 11001000₂ = C8₁₆ Subscripts are also used in chemistry to describe chemical compounds. For example, the molecular structure of water, which includes two hydrogen atoms and one oxygen atom, is commonly written: H₂O Most modern word processors include a text formatting option that allows you to enter subscript or superscript text. HTML provides the <sub> tag for displaying subscript on webpages. If a text editor doesn't provide subscript formatting, you can shrink the font size of the subscript as a workaround. If you are working with plain text, the only option is to enter subscript characters on a separate line below the text. Updated May 14, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/subscript Copy

Supercomputer

Supercomputer As the name implies, a supercomputer is no ordinary computer. It is a high performance computing machine designed to have extremely fast processing speeds. Supercomputers have various applications, such as performing complex scientific calculations, modeling simulations, and rendering large amounts of 3D graphics. They may also be built to simply showcase the leading edge of computing technology. If you are hoping to have a supercomputer on your desk, you may be out of luck. Supercomputers are typically several times the size of a typical desktop computer and require far more power. A supercomputer may also consist of a series of computers, which may fill an entire room. Examples of single machine supercomputers include the early Cray-1 and Cray X-MP systems developed by Cray Research as well as the more recent Blue Gene and Roadrunner systems developed by IBM. System X is an example of a multi-system supercomputer, which was developed by Virginia Tech and is comprised of 1,100 Apple Xserve G5s. Supercomputers cost a fortune to build and are expensive to maintain, which is why only a few exist in the entire world. Furthermore, computing power continues to advance each year, meaning it isn't too long before a ground-breaking supercomputer isn't so super. The good news is that the supercomputers of the past eventually become the personal computers of today. Therefore, your home PC most likely has more computing power than many supercomputers from previous decades. Now that's super cool. Updated September 25, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/supercomputer Copy

Superscalar

Superscalar A superscalar CPU can execute more than one instruction per clock cycle. Because processing speeds are measured in clock cycles per second (megahertz), a superscalar processor will be faster than a scalar processor rated at the same megahertz. A superscalar architecture includes parallel execution units, which can execute instructions simultaneously. This parallel architecture was first implemented in RISC processors, which use short and simple instructions to perform calculations. Because of their superscalar capabilities, RISC processors have typically performed better than CISC processors running at the same megahertz. However, most CISC-based processors (such as the Intel Pentium) now include some RISC architecture as well, which enables them to execute instructions in parallel. Nearly all processors developed after 1998 are superscalar. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/superscalar Copy

Superscript

Superscript A superscript is a character or string that is smaller than the preceding text and sits above the baseline. Superscripts have several applications in both math and writing. In math, superscripts are commonly used for exponents. For example: 2³ = 8 The calculation above can also be written as 2^3 = 8. The caret symbol (^) is often used as an alternative when superscript text formatting is not available. Both "2³" and "2^3" can be described as "two to the third power," where "3" is the exponent. In writing, superscripts have several different uses. Examples include footnotes, ordinal indicators, and trademarks. A footnote is a letter or number, typically placed at the end of a sentence, that refers to a reference or citation. The actual citation, which is preceded by the corresponding letter or number, may be located at the end of the page, document, or literary work. For example, the "c" is the footnote in the sentence below. The population of the United States in 2015 is estimated to be 320 million.ᶜ Ordinal indicators may also be written using superscript, such as the following examples: 1ˢᵗ (first), 2ⁿᵈ (second), 3ʳᵈ (third). Finally, trademarks and service marks are often displayed in superscript, such as ™ and ℠. Most word processors support both subscript and superscript text. HTML provides the <sup> tag for displaying superscript in a webpage. If you are editing a plain text document or using a program that doesn't support superscript, you can represent superscript text using other methods. For example, mathematical exponents can be denoted with the caret symbol (^). Footnotes and trademarks can be denoted using parentheses, such as (a) for a footnote or (TM) for a trademark. Ordinal notation does not require superscript and can be written 1st, 2nd, 3rd, etc. Updated May 14, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/superscript Copy

Surface

Surface Surface is a tablet PC developed by Microsoft. The first Surface model was released on October 26, 2012, the same day as Windows 8 and Windows RT. The Surface is a hybrid device that combines the features of a tablet and a laptop. It does not require any external input devices and can be used as a standalone touchscreen tablet. However, the Surface easily connects to Microsoft's custom keyboard, which includes a trackpad that can be used to control a cursor on the screen. The built-in "kickstand" allows the tablet to stand upright, and when combined with the keyboard, makes the Surface look and feel like a laptop. The Surface runs both Windows RT and Windows 8 Pro, though the device initially shipped exclusively with Windows RT. Both operating systems support touchscreen and keyboard/mouse input, which means you can use the Surface however you prefer. For example, if you are on-the-go, you can use the on-screen keyboard to enter text input. If you are at a desk, you can flip out the kickstand and use the physical keyboard. You can also use the keyboard's built-in trackpad or a connected mouse to click on items rather than tapping the screen. The Surface is designed to be a highly portable device and therefore includes built-in wireless communication, such as Wi-Fi (802.11a/b/g/n) and Bluetooth 4.0. It also has a USB port for wired peripherals, a micro SDXC card slot for extra storage, an audio input/output jack, and a micro HDMI video out port. The Surface's magnesium case includes built-in front and rear-facing 720p HD cameras and a 16x9 screen with a resolution of 1366x768 pixels. NOTE: The Surface can be purchased with or without a keyboard. You can choose between either the "touch keyboard," which has a flat, spill-proof surface, or the "type keyboard," which has raised keys and feels more like a traditional keyboard. Both keyboards double as protective covers for the Surface and can be folded to the back of the tablet when not in use. Updated November 1, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/surface Copy

Surge Protector

Surge Protector The surge protector is an important, yet often overlooked part of a computer setup. It allows multiple devices to be plugged into it at one time and protects each connected device from power surges. For example, a home office may have a computer, monitor, printer, cable modem, and powered speakers all plugged into one surge protector, which is plugged into a single outlet in the wall. The surge protector allows many devices to use one outlet, while protecting each of them from electrical surges. Surge protectors, sometimes called power strips, prevent surges in electrical current by sending the excess current to the grounding wire (which is the round part of the plug below the two flat metal pieces on U.S. outlet plugs). If the surge is especially high, such as from a lightning strike, a fuse in the surge protector will blow and the current will be prevented from reaching any of the devices plugged into the surge protector. This means the noble surge protector will have given its life for the rest of the equipment, since the fuse is destroyed in the process. While surge protectors all perform the same basic function, they come in many shapes and sizes with different levels of protection. Some may look like basic power strips, while others may be rack mounted or fit directly against the wall. Most surge protectors offer six to ten different outlets. Cheaper surge protectors offer limited protection for surges (under 1000 joules), while more expensive ones offer protection for several thousand joules and include a monetary guarantee on connected devices if a power surge happens. Typically, you get what you pay for, so if you have an expensive computer system, it is wise to buy a quality surge protector that offers at least 1000 joules of protection. Some surge protectors also include line conditioning, which uses an electromagnet to maintain a consistent level of electricity when there are slight variations in current. For example, you might notice your computer monitor or television fade for a moment when you turn on a high-powered device, like a vacuum or air conditioner. A surge protector with line conditioning should prevent connected devices from being affected by these slight variances in current. While you may be able to hook up your computer system without a surge protector, it is important to protect your equipment by using one. You may not need a large, expensive surge protector with line conditioning, but using a quality surge protector for all your electronic devices is a smart choice. Updated December 27, 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/surgeprotector Copy

Swap File

Swap File A swap file is a file that contains data retrieved from system memory, or RAM. By transferring data from RAM to a secondary storage device in the form of a swap file, a computer is able to free up memory for other programs. Swap files are a type of virtual memory, since they are not stored in physical RAM. They extend the amount of available memory a computer can access by swapping memory used by idle processes from RAM to a swap file. This action is also called "paging" and is a common type of memory management. When swapped memory is needed by a program, the data from the swap file is paged back into physical memory. While swap files are a convenient way to increase the amount of available system memory, they can also decrease system performance. For instance, if a computer is using nearly all of its physical memory, the system may need to frequently swap data between RAM and swap files. Since reading data from a secondary storage device (such as an HDD or SSD) is much slower than accessing RAM, repeatedly swapping memory may cause noticeable delays. Swap files are automatically created, accessed, and deleted by the operating system, which means you don't have to worry about maintaining them. However, if you notice your computer is slowing down when you switch between programs, it may indicate your computer is running low on system memory and is frequently swapping memory. You can temporarily alleviate this problem by closing unused programs. The best long term solution is to add more RAM. File extension: .SWP Updated February 26, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/swap_file Copy

Swift

Swift Swift is a high-level programming language developed by Apple and made available in 2014. It is designed for writing apps for Apple platforms, including macOS, iOS, tvOS, and watchOS. The Swift language is based on Objective-C, which was used for NeXTSTEP development in the 1980s, and later macOS and iOS. Swift has similar syntax and maintains the object-oriented features of Objective-C, but provides a more simplified programming experience. For example, Swift code is easier to read and write than Objective-C. It allows several common commands to be combined and does not require semicolons (;) at the end of each statement. Additionally, Swift handles several programming obstacles automatically. For example, Swift: initializes variables before they are used handles "nil" (NULL) values explicitly ensures array indices are in bounds prevents integers from overflowing their allotted memory manages memory automatically Apple's Xcode software development IDE has supported Swift since version 6 (released in 2014). Xcode also supports "Swift Playgrounds," a feature that allows programmers to edit Swift code and see the results immediately. For example, the playground may display the source code on the left and an app simulator on the right. Changes to the code update the app simulation on-the-fly. Several playgrounds are included with Xcode to provide an easy way to learn the language. Since Apple develops and maintains the Swift language, it is optimized for Apple hardware. Therefore, an iOS app developed in Swift may perform better than a similar app developed in another language. Apple also updates Swift with new features on a regular basis. This allows developers programming in Swift to take advantage of the latest advances in Macs, iPhones, iPads, and other Apple products. File extension: .SWIFT Updated July 8, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/swift Copy

Swipe

Swipe Swipe is a command used primarily with touchscreen devices, such as smartphones and tablets. It is also supported by some laptops with trackpads and desktop computers with trackpad input. A swipe involves quickly moving (or "swiping") your finger across a touchscreen or trackpad. For example, swiping the screen from right to left in a photo viewing application typically displays the next photo. While browsing multiple photos, swiping up or down may allow you to scroll through the photo library. Most smartphones also allow you to swipe left or right to switch between home screens. Devices that support multi-touch may allow you to swipe with multiple fingers to perform different functions. For example, MacBook users can swipe left or right with two fingers to perform the Back or Forward command in a web browser. Swiping up or down with three fingers performs the Exposé command in Mac OS X. Updated June 21, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/swipe Copy

Switch

Switch A switch is a piece of networking hardware that links multiple devices together on a network. Switches are typically small, flat boxes that contain a group of Ethernet ports — as few as 4 on a small home office switch that sits on a desk, or up to several dozen on a rack-mounted enterprise switch. They perform a task known as packet switching to direct traffic within the local area network, forwarding data packets to their destinations. Networking switches occupy a middle ground between an Ethernet hub and a router. Unlike hubs, which broadcast data packets to every device connected to them, switches forward a data packet only to its intended destination. This allows switches to manage a network's bandwidth more efficiently. However, unlike routers, switches can only direct traffic to other devices on the local network. Data packets bound for another network must go through the network's router first. Switches also use MAC addresses to direct data packets, unlike routers, which use IP addresses. An 8-port D-Link switch. Switches are available in a few configurations, and the best one for a network depends on its size and the needs of its users: Unmanaged switches are small, plug-and-play switches. These switches lack any extra configuration settings and are best suited for homes and small offices or as a way to add extra Ethernet ports to a larger network setting. Managed switches include a suite of configuration options, usually accessible through a command line. These options allow network administrators to prioritize traffic, enable and disable individual ports, apply quality of service settings, and configure a virtual LAN (VLAN). Smart switches, also called intelligent switches, sit in a middle ground between unmanaged and managed switches. They provide some configuration options, usually available through a web interface, but not as many as a fully-managed switch. Updated February 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/switch Copy

Symbolic Link

Symbolic Link A symbolic link (or "symlink") is a file system feature that can be used to create a link to a specific file or folder. It is similar to a Windows "shortcut" or Mac "alias," but is not an actual file. Instead, a symbolic link is an entry in a file system that points to a directory or file. The system recognizes the symbolic link as the actual directory or file, providing an alternate way to access it. Symbolic links are supported by Mac, Windows, and Unix, but are most commonly found on Unix systems, since Unix is known for having a large (and often confusing) hierarchy of directories. The root folder alone has several different subdirectories, many of which do not have a clearly defined purpose. Examples include bin, dev, etc, lib, tmp, usr, and var. This can make it difficult to find certain files, especially if they are located several directories deep in the file system. Symbolic links help alleviate this problem by creating an easier way to access common files and folders. To create a symlink in Unix, you can type a simple command at the command prompt using the syntax below. (The "-s" part of the command is important, since it tells the system to create a symbolic link.) ln -s [target directory or file] /[shortcut] For example, you can create a symlink "/apachelogs" that points to "/usr/local/apache/logs" using the following command: ln -s /usr/local/apache/logs /apachelogs After entering the command above, you would be able to access the Apache logs directory by simply opening "/apachelogs." While symlinks are recognized by most programs, it may be necessary to enable a program or process to follow symbolic links. For example, in order for an Apache web server to recognize symbolic links, the line "Options +FollowSymLinks" must be added to the httpd.conf file. Updated October 22, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/symbolic_link Copy
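Symbolic links can also be created programmatically. As a sketch, Python's standard os module exposes the same operation as the ln -s command above; the paths come from the example, and creating links in system directories typically requires appropriate permissions (on Windows, administrator rights or developer mode).

import os

target = "/usr/local/apache/logs"   # existing directory from the example above
link = "/apachelogs"                # shortcut to create

os.symlink(target, link)            # equivalent to: ln -s /usr/local/apache/logs /apachelogs

print(os.path.islink(link))         # True  -- the path is a symbolic link
print(os.readlink(link))            # "/usr/local/apache/logs"
print(os.path.realpath(link))       # resolves the link to the target path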

SYN Flood

SYN Flood A SYN flood is a type of denial of service (DoS) attack that sends a series of "SYN" messages to a computer, such as a web server. SYN is short for "synchronize" and is the first step in establishing communication between two systems over the TCP/IP protocol. When a server receives a SYN request, it responds with a SYN-ACK (synchronize acknowledge) message. The computer then responds with an ACK (acknowledge) message that establishes a connection between the two systems. In a SYN flood attack, a computer sends a large number of SYN requests, but does not send back any ACK messages. Therefore, the server ends up waiting for multiple responses, tying up system resources. If the queue of response requests grows large enough, the server may not be able to respond to legitimate requests. This results in a slow or unresponsive server. Since SYN flooding is a common type of DoS attack, most server software has the capability to detect and stop SYN floods before they have a noticeable effect on the server. For example, if a server receives a large number of SYN requests from the same IP address in a short period of time, it may temporarily block all requests from that location. Distributed denial of service (DDoS) attacks are more difficult to handle since they flood the server from multiple IP addresses. However, these attacks can be limited by using SYN caching or implementing SYN cookies. SYN caching stores only minimal information about each half-open connection, while SYN cookies encode the connection details in the server's SYN-ACK response so that the server does not have to reserve resources until the client completes the handshake. This type of SYN flood protection can be configured directly on the server or on a network firewall. Updated February 12, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/syn_flood Copy

Sync

Sync "Sync" is short for synchronize. When you sync a device, such as a cell phone, PDA, or iPod, you synchronize it with data on your computer. This is typically done by connecting the device to your computer via a USB or wireless Bluetooth connection. For example, you might sync the address book stored on your computer with your cell phone to update the contacts. If you have an iPod, you may connect it to your computer to sync songs, videos, and other data using Apple iTunes. When you sync a device with your computer, it typically updates both device and the computer with the most recent information. This is also referred to as "merging" the data. For example, if you have added a phone number to your phone since the last time you synced it with your computer, that number will be added to your computer's address book. Similarly, any numbers entered into the computer's address book since the last sync will be added to the phone. Most syncing programs also remove entries that have been deleted on either the device or the computer since the last sync. Since many devices can be synced with a single computer, the computer is often referred to as the "hub" for syncing portable electronics. For example, you might be able to sync an iPod, Blackberry, and PDA using the same address book on your computer. However, you may need to use a different syncing program for each device, since most use a proprietary software utility to sync with the computer. Common syncing programs include iTunes, The Missing Sync, Palm Desktop, and iSync. Updated April 9, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/sync Copy

Syntax

Syntax Every spoken language has a general set of rules for how words and sentences should be structured. These rules are collectively known as the language syntax. In computer programming, syntax serves the same purpose, defining how declarations, functions, commands, and other statements should be arranged. Many computer programming languages share similar syntax rules, while others have a unique syntax design. For example, C and Java use a similar syntax, while Perl has many characteristics that are not seen in either the C or Java languages. A program's source code must have correct syntax in order to compile correctly and be made into a program. In fact, it must have perfect syntax, or the program will fail to compile and produce a "syntax error." A syntax error can be as simple as a missing parenthesis or a forgotten semicolon at the end of a statement. Even these small errors will keep the source code from compiling. Fortunately, most integrated development environments (IDEs) include a parser, which detects syntax errors within the source code. Modern parsers can even highlight syntax errors before a program is compiled, making it easy for the programmer to locate and fix them. NOTE: Syntax errors are also called compile-time errors, since they can prevent a program from compiling. Errors that occur in a program after it has been compiled are called runtime errors, since they occur when the program is running. Updated August 12, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/syntax Copy

Syntax Error

Syntax Error A syntax error is an error in the source code of a program. Since computer programs must follow strict syntax to compile correctly, any aspects of the code that do not conform to the syntax of the programming language will produce a syntax error. Unlike logic errors, which are errors in the flow or logic of a program, syntax errors are small grammatical mistakes, sometimes limited to a single character. For example, a missing semicolon at the end of a line or an extra brace at the end of a function may produce a syntax error. In the PHP code below, the second closing brace would result in a syntax error since there is only one opening brace in the function. function testFunction() { echo "Just testing."; }} Below is an example of a syntax error in a .SWIFT file detected by Apple Xcode. Some software development IDEs check the source code for syntax errors in real-time, while others only generate syntax errors when a program is compiled. Even if a source code file contains one small syntax error, it will prevent an application from being successfully compiled. Similarly, if you run a script through an interpreter, any syntax errors will prevent the script from completing. In most cases, the compiler or interpreter provides the location (or line number) of the syntax error, making it easy for the programmer to find and fix the error. Updated February 1, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/syntax_error Copy

System Analyst

System Analyst A system analyst is the person who selects and configures computer systems for an organization or business. His or her job typically begins with determining the intended purpose of the computers. This means the analyst must understand the general objectives of the business, as well as what each individual user's job requires. Once the system analyst has determined the general and specific needs of the business, he can choose appropriate systems that will help accomplish the goals of the business. When configuring computer systems for a business, the analyst must select both hardware and software. The hardware aspect includes customizing each computer's configuration, such as the processor speed, amount of RAM, hard drive space, video card, and monitor size. It may also involve choosing networking equipment that will link the computers together. The software side includes the operating system and applications that are installed on each system. The software programs each person requires may differ greatly between users, which is why it is important that the system analyst knows the specific needs of each user. To summarize, the system analyst's job is to choose the most efficient computer solutions for a business, while making sure the systems meet all the company's needs. Therefore, the system analyst must have a solid understanding of computer hardware and software and should keep up-to-date on all the latest technologies. He must also be willing to listen to the constant needs and complaints of the users he builds systems for. Updated November 8, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/systemanalyst Copy

System Call

System Call A system call is a request made by a program to the operating system. It allows an application to access functions and commands from the operating system's API. System calls perform system-level operations, such as communicating with hardware devices and reading and writing files. By making system calls, developers can use pre-written functions supported by the operating system (OS) instead of writing them from scratch. This simplifies development, improves app stability, and makes apps more "portable" between different versions of an OS. How System Calls Work Applications run within an area of memory called user space. A system call accesses the operating system kernel, which runs in kernel space. When an application makes a system call, it must first request permission from the kernel. This may be done with an interrupt request, which pauses the current process and transfers control to the kernel. If the request is allowed, the kernel processes the request, such as creating or deleting a file. When the operation is complete, the kernel passes the output back to the application, which transfers data from the kernel space to the user space in memory. The application receives the output from the kernel as input. In the source code of a program, this may be a parameter value or a return value within a function. Once the input is received, the application resumes the process. A basic system call, such as getting the system date and time, may take a few nanoseconds. A more advanced system call, such as establishing communication with a network device, may require a few seconds. In order to prevent bottlenecks, most operating systems initiate a separate kernel thread for each system call. Modern operating systems are multithreaded, meaning they can process multiple system calls at one time. Updated April 27, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/system_call Copy
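High-level languages usually expose system calls through wrapper functions rather than requiring programs to trap into the kernel directly. In the Python sketch below, each call into the standard os module ultimately results in a system call that the kernel handles on the program's behalf:

import os

print(os.getpid())        # getpid(): ask the kernel for the current process ID

# open(), write(), and close() system calls, wrapped by the os module.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via system calls\n")
os.close(fd)

# read() system call to get the data back.
fd = os.open("example.txt", os.O_RDONLY)
print(os.read(fd, 100))
os.close(fd)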

System Hardening

System Hardening Most computers offer network security features to limit outside access to the system. Software such as antivirus programs and spyware blockers prevent malicious software from running on the machine. Yet, even with these security measures in place, computers are often still vulnerable to outside access. System hardening, also called Operating System hardening, helps minimize these security vulnerabilities. The purpose of system hardening is to eliminate as many security risks as possible. This is typically done by removing all non-essential software programs and utilities from the computer. While these programs may offer useful features to the user, if they provide "back-door" access to the system, they must be removed during system hardening. Advanced system hardening may involve reformatting the hard disk and only installing the bare necessities that the computer needs to function. The CD drive is listed as the first boot device, which enables the computer to start from a CD or DVD if needed. File and print sharing are turned off if not absolutely necessary and TCP/IP is often the only protocol installed. The guest account is disabled, the administrator account is renamed, and secure passwords are created for all user logins. Auditing is enabled to monitor unauthorized access attempts. While these steps are often part of operating system hardening, system administrators may choose to perform other tasks that boost system security. While both Macintosh and Windows operating systems can be hardened, system hardening is more often done on Windows machines, since they are more likely to have their security compromised. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/systemhardening Copy

System Preferences

System Preferences System Preferences is an application bundled with OS X that allows you to change settings on your Mac. It is similar to the Control Panel in Windows and supersedes the Control Panel that was part of Mac OS Classic. The System Preferences window displays several different options called preferences panes (instead of control panels). It lists the default preference panes included with OS X, as well as any third party preference panes that have been installed. While the default options have changed since the first version of OS X was released in 2001, many of them are the same. In OS X 10.10 (Yosemite), the System Preferences application includes the 30 preference panes listed below. General Desktop & Screen Saver Dock Mission Control Language & Region Security & Privacy Spotlight Notifications Displays Energy Saver Keyboard Mouse Trackpad General Printers & Scanners Sound iCloud Internet Accounts Extensions Network Bluetooth Sharing Users & Groups Parental Controls App Store Dictation & Speech Date & Time Startup Disk Time Machine Accessibility When you click a preference pane icon in System Preferences, it displays controls that allow you to modify certain settings. For example, you can use the Desktop & Screen Saver preference pane to change your desktop background. Energy Saver allows you to modify how long it takes for your computer to go to sleep when idle. The Network preference pane allows you to change the priority of your Mac's network services and lets you manually configure your network connection. You can access System Preferences by clicking the Apple icon on the left side of the menu bar and selecting "System Preferences…" A shortcut to the program is also included in the Dock by default, but it can be removed. OS X also supports several keyboard shortcuts that open individual system preferences. For example, if you have an Apple keyboard, pressing Option + a brightness function key will open the Displays preference, while pressing Option + a volume function key will open the Sound preference. Updated December 19, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/system_preferences Copy

System Requirements

System Requirements System requirements are the required specifications a device must have in order to use certain hardware or software. For example, a computer may require a specific I/O port to work with a peripheral device. A smartphone may need a specific operating system to run a particular app. Before purchasing a software program or hardware device, you can check the system requirements to make sure the product is compatible with your system. Typical system requirements for a software program include: Operating system Minimum CPU or processor speed Minimum GPU or video memory Minimum system memory (RAM) Minimum free storage space Audio hardware (sound card, speakers, etc) System requirements listed for a hardware device may include: Operating system Available ports (USB, Ethernet, etc) Wireless connectivity Minimum GPU (for displays and graphics hardware) Minimum vs Recommended Requirements Some products include both minimum and recommended system requirements. A video game, for instance, may function with the minimum required CPU and GPU, but it will perform better with the recommended hardware. A more powerful processor and graphics card may produce improved graphics and faster frame rates (FPS). Some system requirements are not flexible, such as the operating system(s) and disk space required for software installation. Others, such as CPU, GPU, and RAM requirements may vary significantly between the minimum and recommended requirements. When buying or upgrading a software program, it is often wise to make sure your system has close to the recommended requirements to ensure a good user experience. Below is an example of minimum versus recommended system requirements for a Windows application. OS: Windows 7 with SP1;  Recommended: Windows 10 CPU: Intel or AMD processor with 64-bit support;  Recommended: 2.8 GHz or faster processor GPU: nVidia GeForce GTX 1050 or equivalent;  Recommended: nVidia GeForce GTX 1660 or Quadro T1000 Disk Storage: 4 GB of free disk space Monitor Resolution: 1280x800;  Recommended: 1920x1080 Internet: Internet connection required for software activation Updated January 30, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/system_requirements Copy

System Resources

System Resources Your computer has many types of resources. They include the CPU, video card, hard drive, and memory. In most cases, the term "system resources" is used to refer to how much memory, or RAM, your computer has available. For example, if you have 1.0 GB (1024 MB) of RAM installed on your machine, then you have a total of 1024 MB of system resources. However, as soon as your computer boots up, it loads the operating system into the RAM. This means some of your computer's resources are always being used by the operating system. Other programs and utilities that are running on your machine also use your computer's memory. If your operating system uses 300 MB of RAM and your active programs are using 200 MB, then you would have 524 MB of "available system resources." To increase your available system resources, you can close active programs or increase your total system resources by adding more RAM. System resources can also refer to what software is installed on your machine. This includes the programs, utilities, fonts, updates, and other software that is installed on your hard drive. For example, if a file installed with a certain program is accidentally removed, the program may fail to open. The error message may read, "The program could not be opened because the necessary resources were not found." As you can see, the term "system resources" can be a bit ambiguous. Just remember that while it usually refers to your computer's memory, it can be used to describe other hardware or software as well. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/systemresources Copy

System Software

System Software System software refers to the files and programs that make up your computer's operating system. System files include libraries of functions, system services, drivers for printers and other hardware, system preferences, and other configuration files. The programs that are part of the system software include assemblers, compilers, file management tools, system utilities, and debuggers. The system software is installed on your computer when you install your operating system. You can update the software by running programs such as "Windows Update" for Windows or "Software Update" for Mac OS X. Unlike application programs, however, system software is not meant to be run by the end user. For example, while you might use your Web browser every day, you probably don't have much use for an assembler program (unless, of course, you are a computer programmer). Since system software runs at the most basic level of your computer, it is called "low-level" software. It generates the user interface and allows the operating system to interact with the hardware. Fortunately, you don't have to worry about what the system software is doing since it just runs in the background. It's nice to think you are working at a "high-level" anyway. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/systemsoftware Copy

System Unit

System Unit The system unit, also known as a "tower" or "chassis," contains the main components of a desktop computer. It includes the motherboard, CPU, RAM, and other components. The case that houses these components is also part of the system unit. Peripheral devices, such as the monitor, keyboard, and mouse are separate from the system unit. Peripherals, combined with the system unit, create a "workstation." Some modern computers, such as the iMac, combine the system unit and monitor into a single device. In this case, the monitor is part of the system unit. While laptops also have built-in displays, they are not considered system units, since the term only refers to desktop computers. Updated December 12, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/system_unit Copy

Systray

Systray The systray, short for "system tray," is a section of the Windows taskbar, typically located on the right side near the clock. It displays small icons for system and application notifications, providing quick access to various system controls and background applications. The systray helps keep you informed through notifications and offers easy access to controls and settings without opening full applications. Common icons in the systray include network status, volume control, battery life, and antivirus notifications. You can also find icons for background applications like cloud storage services, messaging apps, and system utilities. For example, the Windows Update icon displays an orange dot when a system update is available. You can click the icon to learn more about the latest update and install it. Windows 11 systray with extra icons enabled Modern versions of Windows, such as Windows 11, allow you to customize the systray by adding icons and hiding less frequently used ones. Right-clicking on these icons often provides quick access to contextual menus for settings or actions related to the associated application. Updated January 7, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/systray Copy

T1

T1 Stands for "Transmission System Level 1." A T1 line is a telecommunications line that supports 24 voice channels when used for telephone service, or a total bandwidth of 1.544 Mbps when used as an Internet connection. T1 lines typically operate over twisted pair copper wire, similar to a telephone wire, but can also run over coax cable or fiber optic cable. T1 lines are also known as DS1 (Digital Signal Level 1) lines. Telecoms first deployed T1 lines in the 1960s for the digitized transmission of multiple telephone lines over a single connection, known as a trunk line, between telephone switching centers. Telecoms later offered dedicated T1 lines to businesses, supplying up to 24 phone lines over a single connection. Eventually, T1 lines provided those businesses with constant, reliable Internet access. Customers that needed more bandwidth could purchase a bonded T1 line, offering twice the bandwidth of a single T1. As other forms of high-speed internet became available, T1 lines declined in popularity. Some ISPs who offered T1 lines for decades are discontinuing them as part of a general retirement of copper wire networks. NOTE: While 1.544 Mbps might not seem like a fast Internet connection now, a T1 line was a high-speed connection compared to the slow dial-up speeds of the 1980s and 1990s. It was also a dedicated, always-on connection that was not prone to dropping out when a phone call came in. Updated October 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/t1 Copy
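The 1.544 Mbps figure follows from the standard DS1 arithmetic: 24 channels at 64 kbps each, plus 8 kbps of framing overhead. A quick Python check of that math:

channels = 24          # voice channels on a T1 line
channel_rate = 64_000  # bits per second per channel (64 kbps)
framing = 8_000        # framing overhead in bits per second

total_bps = channels * channel_rate + framing
print(total_bps / 1_000_000, "Mbps")   # prints 1.544 Mbps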

T3

T3 Stands for "Transmission System Level 3." A T3 line is a telecommunications standard that provides the equivalent bandwidth of 28 T1 lines, with 672 individual voice channels and 45 Mbps of Internet bandwidth. A T3 connection operates over a coax cable, but limited segments can also use a fiber optic cable. A T3 line is also known as a DS3 (Digital Signal Level 3) line. T3 lines serve as telephone network trunk connections, connecting telephone switching centers to transmit digitized voice calls and other data from place to place. ISPs also offer T3 lines as dedicated Internet connections for businesses, universities, and other institutions. T1 and T3 lines are different levels of the same digital transmission system developed by Bell Laboratories, also known as the T-carrier system. The specification has five levels, T1 through T5, with each level several times faster than the previous level. Only T1 and T3 saw widespread adoption. Newer network technologies have led to a decline in the popularity of T3 lines. Their copper wire backbone is not as fast as the fiber-optic cables replacing them, with some ISPs retiring their copper wire networks entirely. NOTE: While 45 Mbps is a modest Internet speed now, a T3 line was the fastest Internet connection available during the dial-up era of the 1980s and 1990s. Updated October 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/t3 Copy

Tab

Tab In computing, "tab" has two different meanings. It may refer to 1) a key on the keyboard or 2) a document header inside a window. 1. Tab Key The tab key is located on the left side of the keyboard next to Q and enters the tab character (ASCII code: 9) when pressed. If you are typing in a word processor, pressing Tab will move the cursor to the next "tab stop." Tab stops are fixed horizontal distances that are typically four to eight spaces long. Pressing Tab at the beginning of a line can be used to indent text, such as the beginning of a paragraph in a text document or a line of source code in a program. Pressing Tab between blocks of text can be used to horizontally align words in multiple lines. The tab key is also used to highlight or "focus" different fields within a form. When filling out a form in a software program or website, you can press Tab to jump to the next field. For example, you might enter your first name in the "First Name" field, then press Tab to highlight the "Last Name" field. This is called "tabbing" through the fields and allows you to quickly fill out forms without using a mouse. If you need to go backwards, pressing Shift+Tab will typically highlight the previous field. 2. Window Tab A window tab is a user interface element that allows you to navigate between multiple documents in a single window. Instead of a single title bar, a tabbed window may include multiple tabs along the top. Clicking one of the tabs will display the contents of the corresponding document. Tabs have become a common feature in web browsers since they make it possible to have several webpages open without cluttering your screen. The tabbed document interface (TDI) has also become standard in other programs such as Adobe Photoshop and the OS X Finder. While tabbed windows create a cleaner appearance, the drawback is that you can only view the contents of one tab at a time. Therefore, some programs allow you to click and drag a tab to create a new window from the tab. Updated February 19, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tab Copy
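As a quick illustration, the tab character (ASCII code 9) is written as "\t" in most programming languages. This short Python snippet shows the character code and how a tab pushes the following text to the next tab stop:

print(ord("\t"))        # 9, the ASCII code of the tab character
print("Name:\tAlice")   # the tab moves the following text to the next tab stop
print("City:\tParis")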

Table

Table A table is a data structure that organizes information into rows and columns. It can be used to both store and display data in a structured format. For example, databases store data in tables so that information can be quickly accessed from specific rows. Websites often use tables to display multiple rows of data on a page. Spreadsheets combine both purposes of a table by storing and displaying data in a structured format. Databases often contain multiple tables, with each one designed for a specific purpose. For example, a company database may contain separate tables for employees, clients, and suppliers. Each table may include its own set of fields, based on what data the table needs to store. In database tables, each field is considered a column, while each entry (or record) is considered a row. A specific value can be accessed from the table by requesting data from an individual column and row. Websites often use tables to display data in a structured format. In fact, HTML has a <table> tag, as well as <tr> (table row) and <td> (table data) tags for specifying rows and columns. Since many tables use the top row for header information, HTML also supports a <th> tag used to define the cells in the header row. By including tables in a webpage, large amounts of data can be displayed in an easy-to-read format. In the early days of HTML, tables were even used to construct the overall layout of webpages. However, cascading style sheets (CSS) are now the preferred means of designing webpage layouts. Spreadsheets both store and display data in a table format. Programs like Microsoft Excel and Apple Numbers provide a grid, or matrix, of cells in which users can enter data. Each cell is defined by a specific row/column pair, such as A3, which refers to the cell in the first column and third row of the table. By formatting data in tables, spreadsheet applications provide a simple way to both enter data and share data with others. Updated June 6, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/table Copy
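As a small illustration of rows, columns, and fields in a database table, here is a sketch using Python's built-in sqlite3 module; the table and column names are made up for the example:

import sqlite3

# A hypothetical "employees" table: each field is a column,
# and each record inserted becomes a row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO employees (name, role) VALUES (?, ?)", ("Ada", "Engineer"))
conn.execute("INSERT INTO employees (name, role) VALUES (?, ?)", ("Grace", "Analyst"))

# Request a value from a specific column and row
row = conn.execute("SELECT name FROM employees WHERE id = 2").fetchone()
print(row[0])   # Grace
conn.close()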

Tablet

Tablet A tablet, or tablet PC, is a portable computer that uses a touchscreen as its primary input device. Most tablets are about the same size as a small laptop, between 8 and 13 inches (20 to 33 cm) diagonally. Many tablets, like the Apple iPad and Microsoft Surface, can connect to keyboards and use special styluses in addition to the touchscreen input. Since tablet computers don't use mice and keyboards as their primary input methods, the user interface of a tablet is different than that of a laptop. For example, instead of double-clicking an icon to launch an application or open a file, you can tap its icon once. You can scroll by dragging a page directly instead of using a scroll bar. When you need to enter text, an on-screen keyboard appears for you to type on. A Microsoft Surface Pro 7 tablet, with the optional Surface Pen The touchscreens used by tablets support multitouch input, letting you interact with the tablet using gestures designed for multiple fingers — for example, zooming in on something by spreading two fingers apart or switching between apps by swiping with multiple fingers. Some tablets also support special styluses (like the Apple Pencil or Microsoft Surface Pen), which connect wirelessly and let you select objects on the screen, write text, and draw in apps. However, if you attach an optional keyboard and touchpad to a tablet, you can use it just like you would a laptop. Some computers, known as "convertible tablets" or "convertible laptops," combine elements of tablets and laptops into a single device. A convertible laptop includes a built-in keyboard and touchpad like any other laptop but also uses a touchscreen display. It can fold open to function as a laptop, but the keyboard can also fold all the way around to the back of the screen. When fully opened, the keyboard and touchpad are disabled, and the convertible laptop functions instead as a tablet. Updated January 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tablet Copy

Tag

Tag On clothes, tags usually indicate the brand, size of the garment, fabrics used, and the washing instructions. In Web pages, tags indicate what should be displayed on the screen when the page loads. Tags are the basic formatting tool used in HTML (hypertext markup language) and other markup languages, such as XML. For example, the <table> tag is used to insert a table on a webpage. The data that should be inside the table follows the <table> tag, and the table is closed with a </table> tag. If you want text to be bold in a block of text, you would use the bold tag. For example, the HTML: This site is the <b>best website ever!</b> would show up as: This site is the best website ever! (with "best website ever!" rendered in bold). Since there is often a need to format content within more general tags, the tags can be "nested," meaning one tag can enclose one or more other tags. For example: <font face="Times">This is the Times font, and <i>this is in italics</i>.</font> would appear as: This is the Times font, and this is in italics (displayed in the Times typeface, with the final phrase italicized). Tags are a fundamental part of HTML. If you want to build a website of your own, you can create pages from scratch using a text editor and typing your own tags. Or you can use a "WYSIWYG" web development program like Adobe Dreamweaver, which provides a GUI for creating webpage layouts and generates the tags for you. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tag Copy

Tape Drive

Tape Drive A tape drive is a type of storage device that reads and writes data to removable cartridges. These cartridges contain spools of magnetic tape, similar to a VHS tape or audio cassette. Tape drive cartridges have a high data storage capacity but must be written and read sequentially, which makes them best suited for creating full-disk backups for archival and storage. Even though other types of magnetic tape media are no longer common, tape drives have several advantages that keep them in active use for long-term data backups. First, tape cartridges are less expensive per terabyte than hard disk drives and solid-state drives. Tape drives and cartridges are also less prone to failure than hard disk drives. Tape cartridges are removable, allowing you to keep rolling backups on separate cartridges. Finally, when stored in a stable environment at the right temperature and humidity, tape cartridges can store data for up to 30 years, making them ideal for long-term cold storage. HP Enterprise LTO-9 tape drive The most common form factor for tape cartridges is Linear Tape-Open (LTO), which stores data on half-inch magnetic tape. Multiple heads in the drive read and write data in parallel lines running the length of the tape at high speed. The ninth generation of LTO, LTO-9, has a maximum capacity of 18 terabytes uncompressed or 45 terabytes with data compression. Updated June 19, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tape_drive Copy

Target Disk Mode

Target Disk Mode Target disk mode is a way of booting a Macintosh computer so that it acts as an external hard drive on another computer. When a Mac is booted in target disk mode, the typical boot sequence is bypassed and the operating system is not loaded. Instead, the computer's internal and external hard drives are simply mounted on a connected computer. Target disk mode can be used to manually transfer files between two machines or to copy data from one computer to another using Apple's Migration Assistant. In order for target disk mode (TDM) to work, two Macintosh computers must be attached to each other via a Firewire cable. One computer should be on and the computer designated for TDM should be off. To boot into target disk mode, hold the "T" key on the keyboard immediately after turning on or restarting the computer. After a few seconds, the screen should display the Firewire icon, which will move around the screen as long as the machine is in target disk mode. You should then see the hard drive(s) of the computer in TDM appear on the Desktop of the connected computer. Booting a computer in target disk mode makes it easy to transfer files between two machines. Since the hard drives of the computer in TDM automatically mount on the other Mac's desktop, you can simply drag and drop files between them. Also, the computer in target disk mode is not seen as a boot disk, so you don't have to worry about file permissions. You can also run more comprehensive disk diagnostics and repairs. Just be sure not to remove or copy over any important system files on the TDM hard drive(s), since there are no safeguards to protect you from doing so. Since the hard drives of the TDM machine are mounted on the connected computer, you should make sure to unmount or "eject" the hard drives before you turn off the computer. This can be done by selecting the hard drive on the desktop and choosing "Eject" from the File menu. Once the hard drives are ejected, you can safely turn off the TDM computer. When you turn on the computer again, it should boot normally (as long as you don't hold down the "T" key). Any files you copied to the computer's hard drives should appear in the directories you copied them to. Updated December 9, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/targetdiskmode Copy

Taskbar

Taskbar The taskbar is a major GUI element in Windows that spans the lower edge of the desktop. It helps users quickly switch between open windows and applications with a single click. The taskbar includes icons for every application running on the computer, as well as the Windows Start button (on the left side) and the system tray (on the right). Users may also pin icons to it to create shortcuts to applications they use frequently. The majority of the taskbar consists of a row of icons that represent both the application windows that are currently open and any other applications that you have chosen to pin to the taskbar. You can click a taskbar icon to launch that application, to bring its windows to the front, or to restore a minimized or hidden window. If an app has several windows open, clicking its taskbar icon will show thumbnails for each open window that you can choose from. You can click and drag icons on the taskbar to rearrange them. You can pin an application's icon to the taskbar while it is running by right-clicking it and selecting Pin to taskbar; you can remove an icon by right-clicking it and selecting Unpin from taskbar. The Windows 10 taskbar with several open and pinned application icons The Windows taskbar has seen several changes since its introduction in Windows 95. Windows XP added taskbar grouping, which condensed windows from the same application into a single button on the taskbar. Windows 7 removed the text labels from taskbar buttons to save space and allowed you to pin apps to the taskbar directly (instead of to a separate Quick Launch toolbar). Windows 10 added a search field and the Task View button to the taskbar (next to the Start menu). Windows 11 changed the look of the taskbar even further by changing the default alignment from left-aligned to center-aligned. NOTE: Other operating systems include similar task switchers that allow users to switch between app windows. The macOS dock displays icons for open applications and pinned shortcuts, appearing as a floating element that adjusts its size as necessary. Several Linux desktop environments (like KDE Plasma and Unity) also use a taskbar-style application switcher on the bottom or side of the desktop. Updated August 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/taskbar Copy

TCF

TCF Stands for "Transparency and Consent Framework." The TCF is an open-standard framework that helps websites and apps obtain consent from their users to collect personal data for online ad targeting and other purposes. It was developed by IAB Europe — a trade association for online advertisers — to help website owners comply with the privacy requirements in the General Data Protection Regulation (GDPR). Instituted in 2018, the GDPR is designed to protect the personal data of EU residents, and requires websites and apps that collect, store, and process user data to first ask for consent. The TCF helps website and app owners comply with the GDPR by providing clear guidelines on asking users for consent, and what information they must disclose during the process. By laying this information out clearly, the TCF can help provide a consistent experience across websites and apps that complies with the law. A typical TCF user consent request The TCF also includes a set of technical specifications for implementing those policies. It lays out requirements for Consent Management Platforms (CMPs) — the software tools that help track when users consent to store their data. CMPs provide a user interface that asks for consent, and a database that records when users grant it. Another aspect of the TCF tracks third-party vendors with access to personal data, like advertising networks and data analytics firms. It maintains a global vendor list of companies that adhere to its guidelines and allows developers to choose which vendors they work with. The TCF also outlines how to consistently encode and transmit data using a "transparency and consent string." This string packs in data about the user's consent choices, including what data they consent to share and which type of vendor it may be shared with. The personal data covered by the TCF generally includes anything that could identify a website or app's user, particularly when it involves tracking and targeted advertising. Some examples of personal data include device information (like the operating system or web browser), IP address, geolocation data, user ID, and demographic information like age and gender. Updated August 17, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tcf Copy

TCP

TCP Stands for "Transmission Control Protocol." TCP is a fundamental protocol within the Internet protocol suite — a collection of standards that allow systems to communicate over the Internet. It is categorized as a "transport layer" protocol since it creates and maintains connections between hosts. TCP complements the Internet protocol (IP), which defines IP addresses used to identify systems on the Internet. The Internet protocol provides instructions for transferring data, while the transmission control protocol creates the connection and manages the delivery of packets from one system to another. The two protocols are commonly grouped together and referred to as TCP/IP. When data is sent over a TCP connection, the protocol divides it into individually numbered packets or "segments." Each packet includes a header that defines the source and destination and a data section. Since packets can travel over the Internet using multiple routes, they may arrive at the destination in a different order than they were sent. The transmission control protocol reorders the packets in the correct sequence on the receiving end. TCP also includes error checking, which ensures each packet is delivered as requested. This is different than UDP, which does not check if each packet was successfully transmitted. While the built-in error checking means TCP has more overhead and is therefore slower than UDP, it ensures accurate delivery of data between systems. Therefore, TCP is used for transferring most types of data, such as webpages and files, over the Internet. UDP is ideal for media streaming, which does not require all packets to be delivered. Updated January 30, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tcp Copy
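For a sense of what this looks like in practice, here is a minimal Python sketch that opens a TCP connection using the standard socket module; the operating system's TCP implementation handles the segmenting, reordering, and error checking described above. The host and request are placeholders for illustration:

import socket

# Open a TCP connection to a web server and send a simple HTTP request.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(1024).decode(errors="replace"))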

TCP/IP

TCP/IP Stands for "Transmission Control Protocol over Internet Protocol." TCP/IP is the standard suite of protocols used by computers and other devices to communicate over networks. This protocol suite serves as the basic foundation of the Internet. As the abbreviation implies, the TCP/IP suite includes both the Transmission Control Protocol (TCP) and the Internet Protocol (IP). Other protocols, such as the User Datagram Protocol (UDP), are also considered part of the standard suite. The TCP and IP protocols work in tandem to send data from one host to another. First, each host (as well as every other network-connected device) is assigned a unique IP address. TCP establishes a connection between the hosts using their IP addresses, then breaks data into packets and sends them over the Internet. Routers then direct the data from one network to another across a series of interconnections (known as the Internet's backbone) using the IP addresses in the data packet headers. Once it reaches its destination, TCP reassembles the data packets, then confirms receipt of the data back to the other host over IP. Data traveling from one host to another is first sent on the Application layer, broken up and addressed by the Transport layer, routed by the Internet layer, then physically travels over the Link layer before being received, reassembled, and delivered to the proper application on the destination host device. While the standard Internet protocol suite is named after TCP and IP, it includes other protocols that operate across four different layers. Each layer is built upon the layers below it. The Link Layer is the lowest layer, representing the physical network that connects computers and routers. This layer handles the actual transmission of digital data between devices through network cables (like Ethernet, coax, and fiber) and wireless radio (like Wi-Fi and cellular). The Internet Layer handles the routing of data between networks. Every device connected via the Link layer is assigned an IP address, which the Internet Protocol uses to direct data across the Link layer from its origin to its destination. The Transport Layer handles communications between two hosts connected over the Internet layer. TCP and UDP are the two most common Transport layer protocols. They break the data into small pieces, add headers to direct the data across the Internet, then reassemble the data on the destination computer. The Application Layer is the highest-level layer. Protocols like HTTP, SMTP, and FTP send data over the Transport layer to a specified port on the other host, which identifies which application should handle the incoming data. For example, incoming data packets meant for a web browser are delivered to the web browser, not to an email client or music streaming app. Updated October 24, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tcpip Copy

Tebibyte

Tebibyte A tebibyte (abbreviated "TiB") is a data storage unit of measurement equal to 2^40 bytes, or 1,099,511,627,776 bytes. The "tebi" prefix is intended to remove ambiguity compared to multiple definitions of "terabyte," which have evolved over time. A tebibyte is slightly larger than a terabyte (TB), which is 10^12 bytes (1,000,000,000,000 bytes). A terabyte is roughly 0.9 tebibytes, and one tebibyte is approximately 1.1 terabytes. 1,024 gibibytes equals one tebibyte, and 1,024 tebibytes make up a pebibyte. Why Use Tebibytes? Computer scientists first defined a kilobyte using the base 2 value of 2^10 bytes (1,024 bytes). Today, a kilobyte is defined using the base 10 value of 1,000 bytes. A megabyte is 1,000,000 bytes, a gigabyte is 1,000,000,000 bytes, and a terabyte is 1,000,000,000,000 bytes. Conversely, a mebibyte is 1,048,576 bytes, a gibibyte is 1,073,741,824 bytes, and a tebibyte is 1,099,511,627,776 bytes. While the difference between a kilobyte and a kibibyte is roughly 2%, a tebibyte is almost 10% larger than a terabyte. Therefore, it is essential to distinguish between the two units of measurement when describing large amounts of data. NOTE: For a list of data storage units of measurement, view this Help Center article. Updated November 17, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tebibyte Copy
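The difference is easy to check with a few lines of Python:

terabyte = 10**12   # 1,000,000,000,000 bytes
tebibyte = 2**40    # 1,099,511,627,776 bytes

print(tebibyte - terabyte)             # 99,511,627,776 bytes of difference
print(round(tebibyte / terabyte, 3))   # 1.1   (one tebibyte is about 1.1 terabytes)
print(round(terabyte / tebibyte, 3))   # 0.909 (one terabyte is about 0.9 tebibytes)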

Technical Debt

Technical Debt Technical debt is a burden software developers face from old source code and architectural decisions. It can range from poorly-written functions to frameworks that are no longer supported. In some cases, technical debt can be handled by rewriting code, while in others, developers may need to rewrite entire programs. Avoiding Technical Debt The best way to manage technical debt is to avoid accumulating it in the first place. This begins with architecting a program with a long-term plan. A wise software engineer will design an application for the future, not just the current state of software and hardware. Choosing "future-proof" programming languages and reliable frameworks are two important decisions in the architectural process. It is also necessary to follow good coding practices to avoid technical debt. For example, if the same logic is repeated multiple times within a program, it should be consolidated into a single class or function. Otherwise, it will be more difficult to locate and update each instance in the future, especially if new developers are working on the project. Commenting code is also essential for reviewing and updating code in the future. Managing Technical Debt The two main ways to manage technical debt are to rewrite sections of code or start from scratch. The best route depends on the size of the project and the amount of technical debt. 1. Rewriting code Rewriting or "refactoring" code is the most common way to handle technical debt. For example, a senior developer may review code written by a junior developer and find ways to optimize performance. It may also be necessary to update deprecated functions so that the code can run on a newer platform. In some cases, large portions of code may need to be rewritten to work with a new API. These types of updates are common in the programming world and are one of the reasons why developers release new versions. 2. Starting from scratch If the amount of technical debt is large enough, it may require more time to update the code than to write a new program from scratch. In some cases, it may be necessary to re-code an application in a new programming language that is compatible with modern software compilers. Rebuilding an app from scratch is a significant investment, but it may provide additional benefits. For example, a new application can take advantage of the latest hardware technologies and modern user interface elements. Updated June 20, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/technical_debt Copy
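As a small illustration of the earlier point about consolidating repeated logic, the following hypothetical Python snippet shows the same calculation before and after a simple refactor:

# Before: the same discount logic is copied in two places,
# so a future change must be made (and tested) twice.
price_a = 100 - (100 * 0.15)
price_b = 250 - (250 * 0.15)

# After: the logic lives in one function, so it only changes in one place.
def apply_discount(price, rate=0.15):
    """Return the price after applying a discount rate."""
    return price - (price * rate)

price_a = apply_discount(100)
price_b = apply_discount(250)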

Technology Services

Technology Services Technology services are, not surprisingly, services that involve technology. These include information technology, or IT, services, such as technical support, computer networking, systems administration, and other services. Common Internet services, such as Web hosting, e-mail, and social networking websites, also fall under the scope of technology services. Therefore, the terms "technology services" and "information technology services" (ITS) are often used interchangeably. However, technology services may also include services not directly related to information technology, such as telephone and cable TV services. Other industries, such as digital photography, graphic design, and video production, may be considered technology services, since they involve modern technology. "Technology Services" (capitalized) may also refer to a division within a school or business that deals with technology maintenance and administration. Because technology services covers such a broad range of industries and occupations, its scope cannot be easily quantified. However, most of us use some form of technology services on a daily basis, which is why it is an important term to know. Updated November 10, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/technologyservices Copy

Telecommunications

Telecommunications Telecommunications, or telecom, is the transmission of signals over long distances. It began with the invention of the telegraph in 1837, followed by the telephone in 1876. Radio broadcasts began in the late 1800s, and the first television broadcasts started in the early 1900s. Today, popular forms of telecommunications include the Internet and cellular phone networks. Early telecommunications transmissions used analog signals, which were transferred over copper wires. Today, telephone and cable companies still use these same lines, though most transmissions are now digital. For this reason, most new telecommunications wiring is done with cables that are optimized for digital communication, such as fiber optic cables and digital phone lines. Since both analog and digital communications are based on electrical signals, transmitted data is received almost instantaneously, regardless of the distance. This allows people to quickly communicate with others across the street or across the globe. So whether you're watching TV, sending an email to a coworker, or talking on the phone with a friend, you can thank telecommunications for making it possible. Updated August 8, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/telecommunications Copy

Teleconference

Teleconference A teleconference is a meeting of people in different locations using telecommunications. A basic teleconference may only include audio, while other types of teleconferences may include video and data sharing. Some common examples include telephone conferences, videoconferences, and web meetings. The most simple and widely used type of teleconference is a phone conference, since the only equipment required is a speakerphone. If a project needs to be discussed between more than two people in different locations, they can all dial a conference phone number that will allow everybody to talk to each other at the same time. A board meeting may also double as a teleconference if board members attend remotely. Many board rooms include a conferencing system that allows people to "teleconference in" to the meeting by calling the conference phone number. This allows remote board members to actively participate in a board meeting even if they are not physically present. Teleconferences may also include live video. These types of teleconferences, often called videoconferences, allow people to see each other in real-time during a remote meeting. A videoconference may be one-way, where one remote user or group is displayed on a video feed, or two-way, where both sides can see each other. In the early days of videoconferencing, expensive equipment was required in order to set up a videoconference. Today, you can simply use a computer's built-in video camera and free software such as Skype to videoconference with other users. Thanks to the Internet, teleconferences may also include data sharing. These types of conferences often use web browsers as the user interface and are therefore called web meetings. For example, you can use a web-based service like Join.me to share your screen with other users. You can even let another person control your screen, which is great for remote troubleshooting. Businesses may use a web-based service like WebEx to host an online presentation for hundreds or even thousands of attendees. When hosting a web presentation, the presenter controls the screen and is typically the only one who can be heard during the meeting. Attendees are often provided with a chat window in which they can ask questions and interact with the presenter. Updated October 9, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/teleconference Copy

Telehealth

Telehealth Telehealth is an umbrella term that encompasses health services provided using telecommunications technologies. It includes everything from the electronic delivery of patient health information and prescriptions to remote interactions between patients and doctors in different locations. Telehealth has existed for decades, dating back to the invention of the telephone in the late 1800s. It first enabled doctors and nurses to communicate over long distances and provide medical information to their patients over the phone. The telephone provided a significant boost in medical care because patients were no longer limited by the expertise of doctors in their local area. As technology has progressed, so has telehealth. For example, the fax machine made it possible to transfer medical information between hospitals in minutes instead of days. Pagers helped notify doctors and surgeons in emergency situations. Modern technologies like smartphones and the Internet make it possible for medical professionals and patients to access health information anytime, anywhere. Online services like MyChart and AthenaNet provide patients with direct access to their own health records, including test results, scans, and medications. Remote Healthcare While telehealth encompasses many types of medical services, it is often used to describe remote healthcare. For example, a telehealth doctor may "visit" patients remotely using videoconferencing technology. Patients can book remote appointments and a doctor can evaluate their condition over the video feed. A more advanced type of telehealth, called telesurgery, allows doctors to perform remote surgery using a specialized robotic system. Updated May 12, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/telehealth Copy

Telemetry

Telemetry Telemetry is an automated process that collects data from one device and transmits it to another for monitoring and analysis. Telemetry is widely used in the scientific and medical fields to gather data from remote or wireless sensors. In IT and software development, it refers to data gathered by applications and transmitted from one computer to another to monitor performance, diagnose problems, and provide development feedback. There are several reasons for software developers to gather telemetry data. Many applications and operating systems collect diagnostic data like log files and crash reports. This data helps developers fix problems by providing specific and accurate information automatically instead of waiting for affected users to reach out. Developers may also gather data to help improve the usability of their software. By gathering statistical information about how users interact with their applications, developers can make informed decisions about what features to prioritize. They can also compile information about users and their platforms, like what operating system they use, their screen size and resolution, and the amount of storage and memory available. Diagnostic telemetry settings in Windows 10 The collection of telemetry data does involve some privacy concerns. Telemetry may gather some of your personal data, including your geographic location or what websites you visit. For the sake of user privacy, telemetry gathering should be an opt-in setting configured during installation or when the application is first run. Updated May 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/telemetry Copy
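Below is a rough sketch of what sending a single telemetry event might look like, using only Python's standard library. The endpoint URL and event fields are hypothetical, and as noted above, a real application should only transmit data like this after the user opts in:

import json
import platform
import urllib.request

ENDPOINT = "https://telemetry.example.com/v1/events"   # hypothetical endpoint

event = {
    "event": "app_crash",
    "app_version": "2.4.1",
    "os": platform.system(),
    "os_version": platform.release(),
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncommenting this line would transmit the event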

Telnet

Telnet Telnet is a protocol for connecting from one computer to another using a text-based command line interface. Telnet connections can be made over a local network or over the Internet. Making a Telnet connection requires a client application on the local computer (where commands are typed) and a Telnet server running on the remote computer (where the commands are carried out). Once connected, the command line on the client will be treated as if it were on the server, accepting input and displaying output. A telnet client connected to a telnet server The Telnet protocol was first developed in 1969 to enable computer users at universities and research institutions to use their room-sized mainframe computers from smaller, more conveniently-located computer terminals. Since it was designed for use on a local network, before the Internet even existed, it does not encrypt its traffic. Information, including user names and passwords, is sent as plain text and can be easily read if intercepted by a third party, so Telnet is no longer widely used. Other protocols, like SSH, are now used instead. Updated September 22, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/telnet Copy

Template

Template A template is a file that serves as a starting point for a new document. Templates contain placeholder fields you can fill in and professionally-designed styles and layouts. Word processors, presentation programs, desktop publishing programs, and some website hosts often include template galleries you can browse when creating a new file. Using a template can save you a lot of time by doing a lot of the design work for you. For example, starting a new presentation using a PowerPoint template will automatically generate text placeholders for titles, subtitles, and slide content; it will also include design elements like backgrounds, shapes, font combinations, and color schemes. You can use the application's design tools to modify the template's design to personalize it while also saving time by not starting from scratch. Many applications save templates as a unique file type. When you create a new file from a template, the application will prompt you to save it as a new file instead of overwriting the template. Many applications also allow you to create your own templates by designing a document, creating placeholder objects, and saving it as the application's template file type. Updated February 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/template Copy

Terabyte

Terabyte A terabyte is 10^12 or 1,000,000,000,000 bytes. One terabyte (abbreviated "TB") is equal to 1,000 gigabytes and precedes the petabyte unit of measurement. While a terabyte is exactly 1 trillion bytes, in some cases terabytes and tebibytes are used synonymously, though a tebibyte actually contains 1,099,511,627,776 bytes (1,024 gibibytes). Terabytes are most often used to measure the storage capacity of large storage devices. While hard drives were measured in gigabytes for many years, around 2007, consumer hard drives reached a capacity of one terabyte. Now, all hard drives that have a capacity of 1,000 GB or more are measured in terabytes. For example, a typical internal HDD may hold 2 TB of data. Some servers and high-end workstations that contain multiple hard drives may have a total storage capacity of over 10 TB. Terabytes are also used to measure bandwidth, or data transferred in a specific amount of time. For example, a Web host may limit a shared hosting client's bandwidth to 2 terabytes per month. Dedicated servers often have higher bandwidth limits, such as 10 TB/mo or more. NOTE: You can view a list of all the units of measurement used for measuring data storage. Updated November 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/terabyte Copy

Teraflops

Teraflops Teraflops is a unit of measurement used for measuring the computing performance of a processor's floating point unit. It is equal to 1,000 gigaflops, or 1,000,000,000,000 FLOPS. Teraflops may also be written as "teraFLOPS" or "TFLOPS." The term "teraflops" is both singular and plural, since FLOPS is an acronym for "Floating Point Operations Per Second." While FPU performance has historically been measured in gigaflops, some modern processors exceed 1,000 gigaflops, which is why new measurements are often displayed in teraflops. A similar transition has taken place with clock speeds, which are no longer measured in megahertz, but in gigahertz. Teraflops are often used to measure supercomputer performance and the rate of scientific calculations, which are based primarily on floating point operations. Updated January 7, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/teraflops Copy
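A common back-of-the-envelope estimate of theoretical peak performance multiplies the number of floating point units by the clock speed and the floating point operations completed per cycle. The numbers in this Python sketch are made up purely for illustration:

cores = 2048            # parallel floating point units
clock_hz = 1.5e9        # 1.5 GHz clock speed
flops_per_cycle = 2     # e.g., a fused multiply-add counts as two operations

peak_flops = cores * clock_hz * flops_per_cycle
print(peak_flops / 1e12, "teraflops")   # 6.144 teraflops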

Terahertz

Terahertz Terahertz is a unit of measurement sometimes used to measure computer clock speeds. One terahertz is equal to 1,000 gigahertz (GHz), or 1,000,000,000,000 hertz (Hz). Since the majority of personal computers operate between two and four gigahertz, most computer clock speeds are not measured in terahertz. Instead, terahertz is more often used to measure the total speed of computing clusters or supercomputers. Like gigahertz, terahertz only measures frequency, or cycles per second. Since some processors require more cycles to process instructions than others, terahertz is not always an accurate measurement of overall computing power. Additional factors, such as RAM speed, bus speed, and processor cache, also affect a computer's performance. Therefore, other units of measurement, such as MIPS and FLOPS, are typically used to measure the computing performance of supercomputers and other high-end computer systems. Abbreviation: THz. Updated October 12, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/terahertz Copy

Terminal

Terminal "Terminal" may refer to two separate but related tools — a hardware terminal that provides a text-based user interface to a remote computer, or a software program that emulates a hardware terminal. Both tools allow remote users to connect to a computer to run programs. Hardware Terminal Computers in the 1970s were large, expensive machines that were often used in a time-sharing system. Time-sharing allowed multiple people to use a computer at once, splitting its computing resources between everyone and maximizing the amount of work it could do in a day. Users connected to the computer using terminals — simple, low-power devices consisting of a monitor and keyboard, without any built-in computing power. A terminal connected to a remote computer relayed text commands from the user to the computer, then displayed output from the computer on its screen. Terminals were the primary way most users interacted with computers throughout the 1970s and early 1980s. With the rise of affordable microcomputers in the 1980s, it became more common to interact directly with your own PC instead of sharing a remote computer. Terminal Emulator A terminal emulator is a program that provides a text-based command line interface to a local or remote computer. Terminal emulators simulate how hardware terminals send text commands, but also include additional features like the ability to customize the terminal's appearance or to send the text received by the terminal to a printer. Some terminal emulators support window tabs, allowing you to connect to more than one computer at a time. The macOS Terminal application Most desktop operating systems with graphical user interfaces have terminal emulators to provide a command line within a window. For example, macOS includes a terminal emulator called "Terminal" that allows users to enter Unix commands and connect to remote computers using SSH. Windows comes with two command line programs, "cmd.exe" and "PowerShell," and Unix computers that use the X Window graphical shell include a terminal emulator called "xterm." There are many third-party terminal emulators available as well, if the built-in options don't meet your needs. Updated March 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/terminal Copy

Text Alignment

Text Alignment Most word processing programs give you the capability to change the text alignment of a block of text. This setting determines how the text is displayed horizontally on the page. The four primary types of text alignment include left aligned, right aligned, centered, and justified. Left Aligned - This setting is often referred to as "left justified," but is technically called "flush left." It is typically the default setting when you create a new document. Left aligned text begins each line along the left margin of the document. As you type, the first word that does not fit on a line is placed at the left margin on the next line. This results in a straight margin on the left and a "ragged edge" margin on the right. Right Aligned - This setting is also called "right justified," but is technically known as "flush right." It aligns each line of text along the right margin of the document. As you type, the text expands to the left of the cursor. If you type more than one line, the next line will begin along the right margin. The result is a straight margin on the right and a "ragged edge" margin on the left. Right justification is commonly used to display the company name and address near the top of a business document. Centered - As the name implies, centered text is placed in the center of each line. As you type, the text expands equally to the left and right, leaving the same margin on both sides. When you start a new line, the cursor stays in the center, which is where the next line begins. Centered text is often used for document titles and may be appropriate for headers and footers as well. Justified - Justified text combines left and right aligned text. When a block of text is justified, each line fills the entire space from left to right, except for the paragraph indent and the last line of a paragraph. This is accomplished by adjusting the space between words and characters in each line so that the text fills 100% of the space. The result is a straight margin on each side of the page. Justified text is commonly used in newspapers and magazines and has become increasingly popular on the Web as well. In most word processors, the text alignment options are typically located in the program's primary toolbar. They are often displayed as a row of four icons, which include the left, centered, right, and justified alignment options. These options may also be available in the program's Format menu. You can either select the appropriate setting before you begin typing, or select a block of text and choose the text alignment to apply the new setting. If you want to apply a new text alignment to an entire document, use the Edit → Select All command, then select the alignment you want to use. Updated February 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/text_alignment Copy

Text Box

Text Box A text box is a rectangular area on the screen where you can enter text. It is a common user interface element found in many types of software programs, such as web browsers, email clients, and word processors. When you click in a text box, a flashing cursor is displayed, indicating you can begin typing. If you are using a tablet, tapping inside a text box may automatically display the on-screen keyboard. There are two different types of text boxes — text fields and text areas. A text field is a small box that allows you to enter a single line of text. It is used for entering basic values, such as a name, number, or short phrase. A text area is a larger box that allows you to enter multiple lines of text. Generally, pressing Enter in a text area will enter a newline character, creating a line break. When you press Enter while typing in a text field, either the cursor will jump to the next field or the value will be submitted. In HTML, a text field is defined by an <input> tag with the type "text." A text area is defined by a <textarea> tag. Text fields are the most common, as they are used for entering queries in search engines and website search boxes. However, many Web forms, such as user registration and website contact pages, often contain both types of text boxes. Regardless of what text boxes an online form contains, you can jump from one to the next by pressing the Tab key. Updated December 26, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/text_box Copy

Text Editor

Text Editor A text editor is a software application that creates and edits plain text files. In addition to simply typing text directly, they can cut, copy, and paste text from other sources and quickly find and replace text within a file. Text editors lack the text formatting features of word processors, which use rich text files instead. Plain text cannot change the font, color, size, or layout of text on a page, nor can images be embedded. Software and website developers often use plain text editors to write, edit, and review source code. Despite the limitations of plain text, text editors can include advanced features that change how text looks in the editor. Many text editors can customize the appearance of text in a way that does not affect the text file itself, allowing the user to write in whatever font they choose. Some text editors can automatically highlight the syntax of different programming languages, changing font color and formatting of certain elements to make code easier to read. For example, a text editor can change the color of HTML tags, comments, or function names to help them stand out. These formatting changes do not alter the text itself, only appearing as helpful guides for users. Coders can even customize syntax highlighting to fit their preferences. An HTML file in the BBEdit text editor with tags and links highlighted Text editors can also open and edit extremely large text files far more easily than a word processor can, since a text editor does not have to account for the additional overhead of text formatting and page layout. Many text editors also support using regular expressions to search and manipulate the text within a file. Updated October 10, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/texteditor Copy
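As an example of the kind of regular-expression find-and-replace a text editor offers, here is the equivalent operation in Python; the pattern and sample text are illustrative only:

import re

text = "Contact: alice@example.com, bob@example.org"

# Replace every email address with a placeholder, similar to a
# regex-based find-and-replace in a text editor.
redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email hidden]", text)
print(redacted)   # Contact: [email hidden], [email hidden]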

TFT

TFT Stands for "Thin Film Transistor." A TFT is a type of transistor used in active-matrix LCD screens. TFT LCD screens use a separate transistor to control each pixel in the display. They allow the electrical current that controls each pixel to turn on and off quickly, which decreases response time and makes on-screen motion smoother. TFT LCDs are often used as computer monitors, televisions, mobile phone screens, and other flat-panel color displays. The name "Thin Film Transistor" is derived from the manufacturing process. The manufacturer first applies thin films of a semiconductor (like amorphous silicon) and dielectric materials to a flat, non-conductive surface (like glass). Unneeded silicon is etched away, leaving only a grid of transistors and the transparent glass surface. These transistor panels are thin enough to fit between a polarized backlight and the layer of liquid crystals. These transistors apply an electrical current to the liquid crystals, altering their arrangement to block light in certain ways. The light then passes through other layers of the screen, including a color filter and polarized light filter, to display the final image. Types of TFT Displays Not all TFT LCD screens are made the same. There are several types of TFT panels, made by distinct methods, and with different performance characteristics. The two most common types are TN and IPS. Twisted Nematic (TN) panels contain liquid crystals that twist as an electric current is applied. As the crystals twist, they allow varying amounts of polarized light to pass through. TN panels are the easiest type of TFT LCD to produce and offer the quickest response times. However, TN panels don't display colors as accurately as other types of panels, particularly when viewed at an angle. In-Plane Switching (IPS) panels contain liquid crystals that don't twist but instead reorient themselves while remaining parallel to their original planar position. As the crystals realign, they allow varying amounts of polarized light through. IPS panels are more expensive to produce than TN panels, require more power to operate, and have slower response times. However, they are more accurate at color reproduction, even when viewed at an angle. Updated November 22, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tft Copy

TFTP

TFTP Stands for "Trivial File Transfer Protocol." TFTP is a file transfer protocol similar to FTP, but is much more limited. Unlike FTP, TFTP does not support authentication and cannot change directories or list directory contents. Therefore, it is most often used to transfer individual files over a local network. TFTP may also be used to boot a computer system from a network-connected storage device. While FTP connections use the TCP protocol, TFTP connections are made over UDP, which requires less overhead than TCP. This means TFTP file transfers may be faster, but less reliable than FTP transfers. FTP uses TCP ports 20 and 21, while TFTP transfers files over UDP port 69. TFTP is most often used on Unix systems, but it is supported by Windows and Mac OS X as well. You can initiate a TFTP file transfer via a command-line interface using the following syntax: tftp [-i] [Host] [{get | put}] [Source] [Destination] The "-i" parameter is used to transfer files in binary mode and should be omitted when transferring an ASCII text file. The get command is used to retrieve a file, while the put command is used to send a file to another system. Updated May 2, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tftp Copy

Thermistor

Thermistor A thermistor (short for "thermal resistor") is a type of resistor that is used to measure temperature. While typical resistors are designed to maintain consistent resistance regardless of temperature, a thermistor's resistance varies significantly as the temperature changes. Once a thermistor is calibrated, changes in electrical resistance can be accurately translated into changes in temperature. Thermistors are commonly used in computers to monitor the ambient temperature of internal components. For example, thermistors may be used to record the temperature near the CPU, RAM slots, and the power supply. These thermistors are usually integrated into the computer's motherboard. The actual temperature of components such as the processor and memory modules is typically measured by a diode that is integrated into the chip. Computers use the information recorded by thermistors to prevent overheating. For example, if a processor is running near capacity for an extended period of time, the temperature may gradually increase. When this happens, the computer might speed up the internal fans to increase airflow and cool the computer. In extreme circumstances, such as when a laptop is used outside on a hot day, the fans may not be able to keep the computer at a safe temperature. If the thermistors record a dangerously high temperature, the computer may shut down to avoid overheating and damaging the hardware. Updated June 24, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thermistor Copy
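How a calibrated resistance reading is translated into a temperature depends on the sensor, but a common approach for NTC thermistors is the simplified beta-parameter equation. The following Python sketch is a generic illustration, not the method used by any particular motherboard; the resistance values and beta constant are hypothetical:

    import math

    def ntc_temperature_c(resistance_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
        # Simplified beta-parameter equation: 1/T = 1/T0 + (1/B) * ln(R/R0), with T in kelvin.
        t0_k = t0_c + 273.15
        inv_t = 1.0 / t0_k + math.log(resistance_ohms / r0) / beta
        return 1.0 / inv_t - 273.15

    # A 10 kilohm (at 25 C) thermistor reading about 6.7 kilohms:
    print(round(ntc_temperature_c(6_700), 1))   # roughly 34 degrees C (illustrative values)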

Thick Client

Thick Client Thick clients, also called heavy clients, are full-featured computers that are connected to a network. Unlike thin clients, which lack hard drives and other features, thick clients are functional whether they are connected to a network or not. While a thick client is fully functional without a network connection, it is only a "client" when it is connected to a server. The server may provide the thick client with programs and files that are not stored on the local machine's hard drive. It is not uncommon for workplaces to provide thick clients to their employees. This enables them to access files on a local server or use the computers offline. When a thick client is disconnected from the network, it is often referred to as a workstation. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thickclient Copy

Thin Client

Thin Client In the 1950s, minimalism emerged as a popular art movement. In the 1990s, minimalism emerged again as a popular computer trend. As computer networking became more commonplace, minimalist computers became more common as well. In fact, these trimmed-down machines, often referred to as thin clients, are still popular today. Thin clients function as regular PCs, but lack hard drives and typically do not have extra I/O ports or other unnecessary features. Since they do not have hard drives, thin clients do not have any software installed on them. Instead, they run programs and access data from a server. For this reason, thin clients must have a network connection and are sometimes referred to as "network computers" or "NCs." Thin clients can be a cost-effective solution for businesses or organizations that need several computers that all do the same thing. For example, students in a classroom could all run the same program from a server, each using their own thin client machine. Because the server provides the software to each computer on the network, it is not necessary for each NC to have a hard drive. Thin clients also make it easier to manage computer networks since software issues need to be managed only on the server instead of on each machine. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thinclient Copy

Third Party

Third Party In the computer world, a third party may refer to either a hardware manufacturer or a software developer. It is a label given to companies that produce hardware or software for another company's product. Third party hardware refers to components that are developed by companies besides the original computer manufacturer. For example, a person may buy a Dell computer and then upgrade it using third party components, such as an Nvidia video card and a Seagate hard drive. Since the components are not included with the computer and are purchased from companies other than Dell, they are considered third party hardware. These components would typically not be supported by Dell, but instead would be supported by the original equipment manufacturer, or OEM. Third party software refers to programs that are developed by companies other than the company that developed the computer's operating system. Therefore, any Macintosh applications that are not developed by Apple are considered third party applications. Likewise, any Windows programs developed by companies other than Microsoft are called third party programs. Since most programs are developed by companies other than Apple and Microsoft, third party applications make up the majority of software programs. Some programs also support third party plug-ins, which add functionality to the software. For example, Adobe Photoshop supports plug-ins that add features like extra filters and selection tools to the program. These plug-ins may be created and distributed by other companies, but are designed to work with Adobe Photoshop. Therefore, they are called third party plug-ins. Updated August 12, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thirdparty Copy

Thread

Thread "Thread" has several possible definitions in the context of computing. It may refer to process threads, which are the basic unit of instruction that a computer's processor executes. It may refer to a message thread in emails, social media posts, or instant messages. It may also refer to a wireless protocol used by smart home devices to communicate with each other. Process Threads A thread is the basic unit of execution in a computer process. Each process thread includes instructions for a computer's processor to execute in a sequence. When a processor finishes executing one thread, it moves on to the next based on the priority assigned by the operating system. Most modern operating systems support multithreading, which executes multiple threads at once. A program's tasks can be divided into multiple threads that can run in parallel, allowing it to take advantage of multiple processor cores and accomplish tasks faster than if it ran in single-threaded. Processes on macOS sorted by how many threads they contain Message Threads A message thread groups messages into a conversation by topic instead of purely chronologically. For example, a user on a web forum can create a post, and all replies to that post are part of its thread. Similarly, an email thread starts with an initial message and includes every back-and-forth response, allowing an email client to display the entire conversation chronologically. In some cases, you can even collapse a message thread to hide a message's replies, saving screen space and allowing you to focus on another topic instead. Smarthome Devices Thread is a low-power wireless mesh networking protocol for smart home devices like smart light bulbs, wall plugs, and sensors. The Thread protocol includes IPv6, which allows each device to have a globally-unique IP address. It operates using the same IEEE 802.15.4 wireless radio standard used by older Zigbee smart home devices but operates at a different frequency. Since Thread uses mesh networking, each device acts as a router that can communicate directly with other devices in the home. If one device in a Thread network drops offline, the other devices can adjust their routing accordingly to prevent an interruption. A Thread network does not need a central hub or bridge for devices to talk to each other, but a border router is required to control Thread devices over the Internet. Thread devices that plug directly into power — for example, smart speakers like the Apple HomePod or Google Home — typically include border routers. Updated July 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/thread Copy

Throughput

Throughput Throughput refers to how much data can be transferred from one location to another in a given amount of time. It is used to measure the performance of hard drives and RAM, as well as Internet and network connections. For example, a hard drive that has a maximum transfer rate of 100 Mbps has twice the throughput of a drive that can only transfer data at 50 Mbps. Similarly, a 54 Mbps wireless connection has roughly 5 times as much throughput as an 11 Mbps connection. However, the actual data transfer speed may be limited by other factors such as the Internet connection speed and other network traffic. Therefore, it is good to remember that the maximum throughput of a device or network may be significantly higher than the actual throughput achieved in everyday use. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/throughput Copy

Thumbnail

Thumbnail A thumbnail image is a small image that represents a larger one. Thumbnails are often used to provide snapshots of several images in a single space. They are commonly used by digital photo organization programs as well as visual search engines. The term "thumbnail" was originally used to describe physical images or drawings that were miniature in size (roughly the size of a human thumbnail). However, it is now widely used to describe digital images, which are displayed on a screen. Digital thumbnails are usually between 75x75 and 200x200 pixels in size. They can also have a rectangular aspect ratio, such as 150x100 pixels. Since digital thumbnails represent a larger version of the same image, they also usually serve as a link to the larger image. For example, clicking a thumbnail on a Google Images search results page will open the page that includes the full size image. Similarly, double-clicking a thumbnail image in a photo browser will usually display the full size version. Thumbnail images may also be referred to as "thumbs." Updated June 4, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thumbnail Copy
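Programs that generate thumbnails typically scale the original image down while preserving its aspect ratio. As a rough sketch, the Pillow imaging library for Python (assuming it is installed and that a file named photo.jpg exists) can do this in a few lines:

    from PIL import Image   # Pillow imaging library

    with Image.open("photo.jpg") as img:     # hypothetical source image
        img.thumbnail((200, 200))            # shrink to fit within 200x200 pixels, keeping aspect ratio
        img.save("photo_thumb.jpg")
        print(img.size)                      # e.g. (200, 150) for a landscape photo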

Thunderbolt

Thunderbolt Thunderbolt is a high-speed I/O interface developed by Intel and Apple. It is based on the PCI Express and DisplayPort technologies and supports both data devices and displays. The first two generations of Thunderbolt use the Mini DisplayPort connector, while the third and fourth generations use a USB-C connector. Since Thunderbolt is based on the PCI Express architecture, external devices connected via Thunderbolt can achieve performance that was previously only possible from internal components. The first-generation Thunderbolt interface offered 10 Gbps of throughput in both directions, later improving to 40 Gbps in third- and fourth-generation connections — 8 times the bandwidth of USB 3.0 and 4 times that of USB 3.1. Like USB and FireWire, Thunderbolt can provide power to connected peripheral devices — up to 10 watts for first- and second-generation connections or up to 100 watts for third- and fourth-generation Thunderbolt. That means external devices, like monitors and external storage devices, can draw their power directly from the Thunderbolt port. Additionally, simple adapters can connect USB, FireWire, and Ethernet devices to a Thunderbolt port. The first and second generations use a Mini DisplayPort connector, while the third and fourth generations use a USB-C connector While Thunderbolt is primarily used as a high-speed data interface, it can also be used to connect to high-resolution displays. The first- and second-generation Thunderbolt interface is physically identical to the Mini DisplayPort interface and therefore can be used to connect a DisplayPort monitor. The third- and fourth-generation Thunderbolt interface sends DisplayPort data over a USB-C connector instead, which is less common but supported by some monitors. Like HDMI, DisplayPort supports both audio and video, eliminating the need for a separate audio cable. Thunderbolt devices can be daisy-chained, meaning multiple devices can connect in sequence to a single Thunderbolt port. For example, you can connect a Thunderbolt display to a computer and a Thunderbolt external hard drive to the display. You could also connect a second Thunderbolt monitor to the first display. This means you can connect two external displays to a laptop, as long as the laptop supports the resolution required for two screens. Updated December 29, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/thunderbolt Copy

Thyristor

Thyristor A thyristor is a four-layer semiconductor that is often used for handling large amounts of power. While a thyristor can be turned on or off, it can also regulate power using something called phase angle control. This allows the amount of power output to be controlled by adjusting the point in each cycle (the phase angle) at which the device begins to conduct. An example of this is a dimmer switch for a lamp. While thyristors have the advantage of using phase angle control and handling large amounts of power, they are not as suitable for low power applications. This is because they can only be turned off by switching the direction of the current. For this reason, a thyristor may take longer to turn on or off than other semiconductors. Also, thyristors can only conduct in one direction, making them impractical for applications that require current to be conducted to and from each device. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/thyristor Copy

TIFF

TIFF Stands for "Tagged Image File Format." TIFF is an image file format for storing raster graphics. It is a versatile format that supports advanced features like layers, transparency, and multi-image files, although not all image programs support every feature. TIFF files can be high quality but are often much larger than other image types. The TIFF format uses tags to store metadata for the image or images a file contains. These tags give the TIFF format its versatility — tags in a file's header define the image size, color depth, compression settings, and other information about each image in the file. Since TIFF files may contain multiple images, this allows each image in a file to have its own properties separate from the rest. TIFF files may also include private tags for special-purpose data, like GPS coordinates for high-resolution GIS imagery. Since the TIFF format has been in use since the 1980s, it's one of the most well-supported and widely-used image formats. It is a standard format used in the publishing, photography, and graphic design industries for storing and sharing high-quality images. Long-term image archival often uses TIFF files because the built-in lossless compression minimizes file size while perfectly preserving image data. TIFF Extensions Not every program that can open a TIFF file supports every feature. All programs that open and edit TIFF files must include a basic set of features called Baseline TIFF. These features include 1- to 24-bit color for grayscale, indexed color, or full RGB. They also include multiple subfiles within a single TIFF file and basic lossless compression. Optional TIFF Extensions add more features. These include additional color spaces like CMYK, YCbCr, and Lab. Extensions also support more compression methods like lossless LZW and lossy JPEG, and an alpha channel to add transparency to an image. TIFF files saved in one program might not display correctly in another program that supports fewer features. NOTE: TIFF files may use both the .TIF and .TIFF file extensions. Updated March 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tiff Copy
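Because one TIFF file can hold several images, image libraries usually expose them as separate frames or pages. A small sketch using the Pillow library in Python (assuming it is installed and that scans.tif is a hypothetical multi-page TIFF) might look like this:

    from PIL import Image   # Pillow imaging library

    with Image.open("scans.tif") as tif:      # hypothetical multi-page TIFF
        print(tif.n_frames, "images in this file")
        for page in range(tif.n_frames):
            tif.seek(page)                    # jump to a specific image within the file
            print(page, tif.size, tif.mode)   # each page can have its own size and color mode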

Tiger

Tiger Mac OS X 10.4, known as Tiger, was released on April 29, 2005, as the fifth major version of Apple's Mac OS X operating system. It followed Mac OS X 10.3 Panther and remained Apple's desktop OS for over two years until the company released Mac OS X 10.5 Leopard in October 2007. Tiger introduced over 200 improvements over its predecessor, Panther. Some of the most significant new features included: Spotlight – a powerful system-wide search tool that allowed users to quickly find files, emails, contacts, and other data stored in the system Dashboard – a new overlay for lightweight apps called widgets that provided quick access to weather, stock updates, a calculator, and other tools Automator – a user-friendly automation tool that let users create workflows to streamline repetitive tasks without needing to write code Safari RSS – an updated version of Safari with built-in RSS feed support for streamlined web content consumption Core Image & Core Video – new graphics and video processing frameworks that enhanced visual effects and improved graphic processing performance Improved 64-bit Support – expanded 64-bit application support, enhancing performance on newer Mac hardware Tiger continued Apple's tradition of naming releases after big cats, including Jaguar (10.2), Puma (10.1), and Cheetah (10.0). While Apple initially used these names internally, they became an official part of the branding with Jaguar (10.2). Mac OS X 10.4 Tiger was known for its stability, speed, and advanced features, making it a compelling alternative to Windows XP at the time. Version 10.4.4 was also the first version of Mac OS X to support Intel-based Macs, marking a significant shift in Apple's hardware strategy. Updated February 12, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tiger Copy

Timestamp

Timestamp A timestamp is a specific date and time "stamped" on a digital record or file. While most often used as a noun, the word "timestamp" can also be a verb. For example, "The tweet was timestamped on January 8, 2021, at 10:44 AM." Timestamps are the standard way to store dates and times on computer systems. For example, operating systems timestamp each file with a created and modified date and time. Digital cameras timestamp each captured photo. Social media platforms store a timestamp with each post, such as the Twitter example above. While timestamps are universal, there is no universal timestamp format. For example, a programming language may use one method, while a database may use another. Even operating systems have different ways of storing timestamps. For instance, Windows stores file timestamps (FILETIME values) as the number of 100-nanosecond intervals since January 1, 1601. Unix stores timestamps as the number of seconds that have elapsed since midnight on January 1, 1970. Because several different timestamp formats exist, most modern programming languages have built-in timestamp conversion functions. A Unix timestamp is also known as "epoch time," which is equivalent to the number of seconds since January 1, 1970, at 12:00 AM GMT (Greenwich Mean Time). This GMT (or "UTC") date/time may be displayed by default on Unix and Linux devices (including Android) when a valid timestamp is not available. In the western hemisphere, such as the United States, this date will appear as December 31, 1969. Storing a timestamp as an integer is efficient since it requires minimal storage space. However, the number must be converted to a legible time format when displayed. MySQL has a TIMESTAMP data type, which conveniently stores timestamps in the following format: YYYY-MM-DD HH:MM:SS MySQL stores timestamps in UTC (Coordinated Universal Time). So January 16, 2021 at 10:15:30 AM US Central Time would be stored in a MySQL database as follows: 2021-01-16 16:15:30 If converted to a timestamp in Linux, this time/date would be represented as: 1610813730 Timestamps also have different resolutions or specificity. In some cases, seconds are sufficient, while in others, milliseconds or even nanoseconds are required. The Linux timestamp above would be 1610813730000 in milliseconds, which provides a resolution of one-thousandth of a second. Computing operations may require timestamps with even higher resolution. PHP includes a microtime() function that outputs a timestamp in microseconds, with a resolution of one-millionth of a second. Updated January 16, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/timestamp Copy
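As an example of such a conversion function, Python's standard library can turn the Unix timestamp from the example above back into a readable UTC date and time (a minimal sketch using only built-in modules):

    from datetime import datetime, timezone

    ts = 1610813730                                    # Unix timestamp from the example above
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)   # seconds since 1970 to a UTC datetime
    print(dt.strftime("%Y-%m-%d %H:%M:%S"))            # 2021-01-16 16:15:30

    # Going the other way: from a date/time back to an epoch timestamp.
    print(int(dt.timestamp()))                         # 1610813730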

Title Bar

Title Bar A title bar is a small strip that extends across the top of a window. It displays the title of the window and typically includes the close, minimize, and maximize buttons. In macOS, these buttons are on the left side of the title bar, while in Windows, they are on the right. Some title bars contain tabs, while others have tabs below them. Both Mac and Windows allow you to move a window by clicking and dragging the title bar. In Windows, you can double-click the title bar to maximize the window or restore it to the previous size. In macOS, double-clicking the title bar minimizes the window and puts it in the Dock. If you have several windows open, you can identify each window by simply looking at the title bar. In Windows, you can hover the cursor over an icon in the taskbar to reveal the titles of all open windows. In macOS, you can view window titles in the "Window" menu of an open application. You can also right-click or click and hold an icon in the Dock to display a list of all open windows for that application. When you open a window on the desktop, the title bar displays the name of the current folder. When you open a window in an application, the title bar typically displays the name of the current file. If you haven't saved the file yet, the title bar may display "Untitled" or something similar. When you save the document, the title will change to the name of the document. Updated December 6, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/title_bar Copy

TLS

TLS Stands for "Transport Layer Security." TLS is a data encryption technology that provides secure data transfers. It encrypts (or scrambles) all data sent from one system to another. Any third party that attempts to "eavesdrop" on the transfer will be unable to recognize the data. TLS can encrypt data transfers over any network, from a small local area network to the Internet. Secure websites, for example, use TLS to deliver website content over HTTPS. Email protocols, such as IMAP and SMTP, also support TLS. Secure protocols typically require a different port number than their non-secure counterparts. Below are the standard non-secure and secure (TLS) ports for web and email connections: HTTP: port 80 HTTPS: port 443 IMAP (standard): port 143 IMAP (secure): port 993 SMTP (standard): port 25 SMTP (secure): port 587 TLS vs SSL TLS is the successor to SSL, or Secure Sockets Layer. It was introduced in 1999 as a more secure means of encrypting data transfers. TLS 1.0 and TLS 1.1 (the latter introduced in 2006) were backward-compatible with SSL. While this simplified the transition process, it also compromised security, since it allowed systems to use the less-secure SSL option. In 2008, TLS 1.2 eliminated backward-compatibility with SSL. It also replaced the MD5/SHA-1 hashing combination with the stronger SHA-256 hash function. TLS 1.3, introduced in 2018, added several additional security improvements. Common web browsers, such as Chrome, Safari, Edge, and Firefox, deprecated TLS 1.1 and earlier in 2018. Today, most web servers and mail servers require TLS 1.2 or 1.3. NOTE: As of 2020, "SSL" is still an acceptable way to refer to secure connections, even if they use TLS. For example, many network and server admins say "SSL" when talking about secure connections that use TLS. Additionally, secure certificates are still called "SSL certificates," even though most operate over TLS. Updated November 14, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tls Copy
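Applications normally rely on a TLS library rather than implementing the protocol themselves. As a rough sketch, Python's standard ssl module can wrap an ordinary TCP socket in TLS before making an HTTPS request on port 443 (example.com is just a placeholder host):

    import socket
    import ssl

    hostname = "example.com"                  # placeholder host
    context = ssl.create_default_context()    # default settings verify the certificate and hostname

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            print(tls_sock.version())         # e.g. 'TLSv1.3'
            tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(200))         # encrypted on the wire, decrypted here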

Toggle Key

Toggle Key A toggle key toggles the input from a group of keys on a keyboard between two different input modes. The most common toggle key is Caps Lock, which toggles the letter keys between lowercase and uppercase mode. Some keyboards also have other toggle keys, such as Num Lock, Scroll Lock, and Insert. Below is a list of toggle keys and their functions. Caps Lock - capitalizes the input of all letter keys when turned on; may allow lowercase letters to be entered using the Shift key on some keyboards; does not affect the number keys. Num Lock - ensures numbers are input from the numeric keypad rather than arrows or other commands; typically turned on by default; may also be used to change letter keys to numbers on laptop keyboards. Scroll Lock - causes the arrow keys to scroll through the contents of a window when turned on; allows users to scroll using the arrow keys rather than clicking on the scroll bar at the right side or bottom of a window; not supported by most modern operating systems. Insert - toggles between "insert mode" and "overtype mode" when entering text; insert mode is the default mode, which inserts characters wherever the cursor is located, while overtype mode overwrites characters as the user types; also not supported by most modern operating systems. Unlike modifier keys, toggle keys are switched on or off each time they are pressed. Therefore, toggle keys do not need to be held down when pressing other keys. Some keyboards have lights on or near the toggle keys to let the user know if they are turned on or off. Some modern operating systems also display the status of toggle keys on the user's screen. For example, the software installed with Logitech keyboards on Macintosh computers displays a Caps Lock icon in the menu bar whenever Caps Lock is turned on. Updated April 3, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/togglekey Copy

Token

Token Besides those small shiny coins that allow you to play video games, there are three different types of tokens: 1. In networking, a token is a series of bits that circulate on a token-ring network. When one of the systems on the network has the "token," it can send information to the other computers. Since there is only one token for each token-ring network, only one computer can send data at a time. 2. In programming, a token is a single element of a programming language. There are five categories of tokens: 1) constants, 2) identifiers, 3) operators, 4) separators, and 5) reserved words. For example, the reserved words "new" and "function" are tokens of the JavaScript language. Operators, such as +, -, *, and /, are also tokens of nearly all programming languages. 3. In security systems, a hard token is a small card that displays an identification code used to log into a network. When the card user enters the correct password, the card will display the current ID needed to log into the network. This adds an extra level of protection to the network because the IDs change every few minutes. Security tokens also come in software versions, called soft tokens. Updated April 9, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/token Copy
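For the programming sense of the word, you can see how a line of source code breaks down into tokens using Python's built-in tokenize module; the one-line sample statement is arbitrary:

    import io
    import tokenize

    code = "total = price * 2  # compute\n"    # arbitrary sample statement

    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))
    # NAME 'total', OP '=', NAME 'price', OP '*', NUMBER '2', COMMENT '# compute', ...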

Toolbar

Toolbar A toolbar is a set of icons or buttons that are part of a software program's interface or an open window. When it is part of a program's interface, the toolbar typically sits directly under the menu bar. For example, Adobe Photoshop includes a toolbar that allows you to adjust settings for each selected tool. If the paintbrush is selected, the toolbar will provide options to change the brush size, opacity, and flow. Microsoft Word has a toolbar with icons that allow you to open, save, and print documents, as well as change the font, text size, and style of the text. Like many programs, the Word toolbar can be customized by adding or deleting options. It can even be moved to different parts of the screen. The toolbar can also reside within an open window. For example, Web browsers, such as Internet Explorer, include a toolbar in each open window. These toolbars have items such as Back and Forward buttons, a Home button, and an address field. Some browsers allow you to customize the items in the toolbar by right-clicking within the toolbar and choosing "Customize..." or selecting "Customize Toolbar" from the browser preferences. Open windows on the desktop may have toolbars as well. For example, in Mac OS X, each window has Back and Forward buttons, View Options, a Get Info button, and a New Folder button. You can customize the Mac OS X window toolbars as well. Toolbars serve as an always-available, easy-to-use interface for performing common functions. So if you haven't made use of your programs' toolbar options or customization features in the past, now is a good time to start! Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/toolbar Copy

Toolchain

Toolchain A toolchain may refer to 1) a specific set of software development tools, or 2) a DevOps (development and operations) process used to test and deliver a software program. 1. Software Toolchain A software toolchain is a collection of tools used for building and delivering an application. These tools are chained together to streamline the software production process. For example, the output generated by one tool in the chain is used as input by the next tool. Developers may use a toolchain near the end of the development process. For example, a development team may build an app within an integrated development environment. Once the source code is complete, a toolchain may be used to generate the executable file. A software development toolchain may include the following components: Assembler - converts assembly language into machine code Linker - merges multiple files into a single program Compiler - generates executable code from a program's source code Library - a collection of code, such as an API, that allows the app to reference prebuilt functions or other resources Debugger - an optional tool that can help fix bugs during the final build steps A developer may create a script that chains these tools together. The resulting toolchain simplifies the process of creating an executable program from existing code. 2. DevOps Toolchain A DevOps toolchain is a list of steps that development and operations teams can follow when releasing a software program. It covers the entire development process, from planning a software application to the maintenance of a program after it has been released. Steps in a DevOps toolchain may include: Plan - define the purpose, requirements, and expectations Create - design and build (program) the software Test - test the software internally on multiple devices; provide a public beta test Release - schedule and deploy the software Monitor - check software metrics, respond to user feedback, update software to fix bugs or add features Updated November 17, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/toolchain Copy
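As a simplified illustration of the first sense (chaining build tools so that each step consumes the previous step's output), the hypothetical Python script below drives a C compiler and linker in sequence; it assumes gcc is installed and that a main.c source file exists:

    import subprocess

    # Step 1: compile the source file into an object file.
    subprocess.run(["gcc", "-c", "main.c", "-o", "main.o"], check=True)

    # Step 2: link the object file (plus any libraries) into an executable.
    subprocess.run(["gcc", "main.o", "-o", "app"], check=True)

    # Step 3: run the resulting program as a quick smoke test.
    subprocess.run(["./app"], check=True)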

Tooltip

Tooltip As computer users, we have become accustomed to icons that represent files, folders, programs, and other objects on the computer. Many software programs also use icons to represent tools, which are often found in the program's toolbar. While these icons can save screen space and make the program's interface more attractive, it can sometimes be difficult to tell what all the tool icons mean. While some tool icons are obvious (such as a printer icon to print and a scissors icon to cut a text selection), others are a bit more ambiguous. For this reason, programs often include tooltips that explain what each tool icon represents. Tooltips are displayed when you roll over an icon with the cursor. It may take a second or two to display the tooltip, but when it does appear, it usually is a small box with a yellow background explaining what the icon represents. For example, in Microsoft Word, when you roll over the disk icon, the tooltip "Save" appears. This means clicking on the disk icon will save your document. In Photoshop, when you roll over the wand icon, the text "Magic Wand Tool (W)" appears. This indicates that clicking the wand icon or pressing the W key will activate the magic wand selection tool. Not all programs incorporate tooltips, but most modern programs include them as part of a user-friendly interface. Operating systems also support them in different ways. For example, Mac OS X will show the full text of a long filename when you place the cursor over the filename. Windows includes tooltips for the systray icons and also tells you information about each file and folder you place the cursor over. If you drag your cursor over different icons on your computer, you may find tooltips you never knew were there! Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tooltip Copy

Torrent

Torrent A torrent is a file that contains metadata for files distributed using the BitTorrent protocol. Torrents can share files of all types over a decentralized P2P network without a dedicated file server. A torrent does not include the files themselves; instead, it provides instructions for a BitTorrent client application, telling it what files to download and how to connect to other computers to download them. A torrent file contains several pieces of metadata. Each torrent includes the address for the tracker, which is the server that facilitates connections between computers. A tracker connects your BitTorrent client with peers (who are currently downloading the torrent's files) and seeds (who have finished downloading but are still sharing). A torrent's peers and seeds are collectively known as a swarm. Each torrent also includes the names, locations, and sizes of the files included in the torrent, as well as cryptographic hashes that ensure the integrity of each file. Once a BitTorrent client connects to at least one peer or seed in the swarm, it downloads the torrent's contents piece by piece. An active torrent containing a single disk image file Sharing a file by torrent does not require a dedicated file server as long as at least one seed is available. If there is a dedicated file server, distributing files using torrents allows it to use less bandwidth while hosting files. Instead of sending a full copy of a file to every computer that requests it, any client currently downloading can also help upload the pieces it already has. This is particularly helpful if a popular file is being downloaded by many users simultaneously, reducing the burden on the file server by crowd-sourcing upload bandwidth across the entire swarm. File Extension: .TORRENT Updated May 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/torrent Copy
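Torrent files store this metadata in a compact encoding called bencode. Decoded, a single-file torrent has roughly the structure of the hypothetical Python dictionary below (the field names follow the BitTorrent specification, but the values are made up):

    # Rough shape of a decoded single-file .torrent (values are hypothetical).
    torrent = {
        "announce": "http://tracker.example.com/announce",   # tracker URL
        "info": {
            "name": "example-distro.iso",     # suggested file name
            "length": 734_003_200,            # total size in bytes
            "piece length": 262_144,          # size of each piece in bytes
            "pieces": b"...",                 # concatenated SHA-1 hashes, 20 bytes per piece
        },
    }
    print(torrent["info"]["length"] // torrent["info"]["piece length"], "pieces (roughly)")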

Toslink

Toslink Toslink is a type of digital audio connection developed by Toshiba Corporation. It uses a fiber optic cable to transmit an audio signal in the form of pulses of light. A single Toslink cable can be used to carry a mono, stereo, or even a surround audio signal. Toslink is similar to the Sony/Philips Digital Interface, known as S/PDIF. It provides the same digital audio data as S/PDIF, but uses a light beam instead of an electrical current to send the data. Because the Toslink cable does not use electrical currents, the connection is immune to electrical or magnetic interference. Toslink connections are most commonly found on high-end home theater receivers, MiniDisc players, and professional audio equipment, as well as Power Mac G5 computers. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/toslink Copy

Touchpad

Touchpad A touchpad or "trackpad" is a flat control surface used to move the cursor and perform other functions on a computer. Touchpads are commonly found on laptops and replace the functionality of a mouse. A touchpad is designed to be controlled with your finger. By sliding your fingertip along the surface, you can move the cursor on the screen. Similar to a mouse, touchpads can detect acceleration as well as linear motion. This allows you to have refined control with slow movements and quickly move the cursor across the screen using a fast motion. Some touchpads have two buttons below them, which correspond to the left-click and right-click mouse buttons respectively. In modern laptops, the buttons may be hidden in the bottom portion of the touchpad. This allows for a larger touchpad surface, but you can press the lower left or lower right section of the touchpad to click the buttons. Touchpads may also include multi-touch technology, which is common on touchscreen devices. This means you can use multiple fingers to perform different actions on your computer. For example, some programs allow you to use two fingers to pinch and zoom in or out on a document or image. You may also be able to twist two fingers on a touchpad to rotate an image left or right. Some programs allow you to swipe left or right to go back in a web browser or jump to another page in a document. NOTE: There is no difference between "touchpad" and "trackpad," so the two terms can be used interchangeably. However, "touchpad" is commonly associated with Windows computers while "trackpad" typically describes the touch controls built into Macs and Apple-branded peripheral devices. Updated February 5, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/touchpad Copy

Touchscreen

Touchscreen A touchscreen is a display that also serves as an input device. Some touchscreens require a proprietary pen for input, though most modern touchscreens detect human touch. Since touchscreen devices accept input directly through the screen, they do not require external input devices, such as mice and keyboards. This makes touchscreens ideal for computer kiosks, as well as portable devices, such as tablets and smartphones. While a touchscreen may look like an ordinary display, the screen includes several extra layers that detect input. The first layer is a hard protective layer that protects the actual display and the touchscreen components. Beneath the protective layer is an electronic grid that detects input. Most modern touchscreens use capacitive material for this grid, in which the electrical charge changes wherever the screen is touched. Beneath the touchscreen layer is the LCD layer, which is used for the actual display. While early touchscreens could only detect a single point of input at a time, modern touchscreens support "multi-touch" input. This technology, which was made popular by the original iPhone, enables the screen to detect multiple finger motions at once. For example, on some touchscreen devices, you can rotate an image by twisting three fingers in a clockwise or counterclockwise motion. Many touchscreen applications also allow you to zoom in and out by spreading two fingers apart or pinching them together. Thanks to multi-touch and other improvements in touchscreen technology, today's touchscreens are easier and more natural to use than they used to be. In fact, improved touchscreen technology has greatly contributed to the popularity of the iPad and other tablet PCs. Updated March 11, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/touchscreen Copy

TPM

TPM Stands for "Trusted Platform Module." TPM is a technology that enables hardware-based security functions. It requires a "crypto-processor," separate from the primary CPU, used exclusively for security purposes. Some functions of a TPM chip include: Providing secure authentication Generating and storing cryptographic keys Encrypting and decrypting data Verifying and recording software loading operations The TPM is a small chip, typically soldered onto a computer's motherboard. It has a unique ID, also called an Endorsement Key (EK), that cannot be changed. Because the key is unalterable and tied to the motherboard, it provides a reliable means of device authentication. However, replacing a motherboard on a TPM-enabled system may require reformatting the startup disk. TPM 2.0 Windows 11 requires TPM 2.0 and a Secure Boot capable PC. These technologies work together to prevent unverified software from loading during the boot process. TPM 2.0 provides several security improvements over the previous standard, including: support for the SHA-256 hashing algorithm support for newer hashing algorithms (TPM 1.2 only supports RSA and SHA-1) more consistent "lockout policy," defined at the OS level a single semiconductor package (TPM 1.2 hardware may use discrete components) Most Windows PCs developed after 2015 have TPM 2.0 chips, which require UEFI firmware. If TPM 2.0 is not enabled by default, it may be possible to enable it in the UEFI interface. Updated December 16, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tpm Copy

Traceroute

Traceroute Traceroute is a TCP/IP diagnostic utility that traces the route traveled by a data packet from one host to another. A traceroute report provides information about which networks a data packet traverses and how long each hop takes. If one node along a packet's journey causes a delay or other network problem, a traceroute can help identify it. Since the Internet is a worldwide network of smaller networks, a data packet makes a lot of intermediate steps as it travels from its origin to its destination. It starts within its local network before it travels to an ISP's network, over the Internet backbone, and then across smaller networks until it reaches its destination. In most cases, each hop goes unnoticed by the average user until one of those hops causes a delay or dropped connection. Running a traceroute can identify where a network connection problem occurs and help network administrators resolve the issue. A traceroute of techterms.com in a macOS Terminal window Running a Traceroute Some network utility applications include a traceroute feature with a GUI, but you can also run a traceroute from any computer's command prompt or terminal. Just enter the command tracert [address] (in Windows) or traceroute [address] (in Unix, Linux, and macOS), where [address] is the domain name or IP address of the system you're trying to reach. While the traceroute runs, it lists each node as a separate line and displays its hostname or IP address. It also shows how long each hop takes in milliseconds. Traceroute uses TTL values to get information about intermediary network nodes. First, it sends a set of data packets with a TTL value of 1. These packets reach the first node along the route, which decrements the TTL value to 0 and responds with a TTL exceeded ICMP message. Traceroute displays the node's hostname or IP address and how long it took to respond. Next, it sends a few packets with a TTL value of 2, which reach the second node along the route before expiring and returning a TTL exceeded message. Traceroute continues to send packets with increasing TTL values until they arrive at the destination node, displaying the information for each node that responds. Some lines in a traceroute may show only asterisks, which means that the node received the data packets but could not send a response due to a firewall. Updated December 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/traceroute Copy
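The TTL-increment technique described above can be sketched in a few lines of Python using raw sockets. This is a bare-bones illustration rather than a full traceroute implementation: it assumes a Unix-like system, requires root privileges for the raw ICMP socket, and uses example.com as a placeholder destination:

    import socket

    def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
        dest_addr = socket.gethostbyname(dest_name)
        for ttl in range(1, max_hops + 1):
            recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
            send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            recv_sock.settimeout(timeout)
            send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)  # packet expires after 'ttl' hops
            try:
                send_sock.sendto(b"", (dest_addr, port))
                _data, addr = recv_sock.recvfrom(512)   # ICMP "time exceeded" or "port unreachable"
                hop_addr = addr[0]
            except socket.timeout:
                hop_addr = "*"                          # no reply (often blocked by a firewall)
            finally:
                send_sock.close()
                recv_sock.close()
            print(ttl, hop_addr)
            if hop_addr == dest_addr:
                break                                   # reached the destination

    traceroute("example.com")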

Trackback

Trackback Trackback is a means of notifying a website that another website has linked to it. By referencing a page's trackback link, a webmaster can automatically inform the other website that he has added a link to one of that site's pages. However, in order for the trackback system to work, both websites must support trackback. This is because the linking website needs to ping the linked website to let it know a link has been added. Trackback was first implemented by Movable Type blogging and Web design software in 2002. Since then, other Web development tools, such as WordPress and Typo, have also added trackback support. This allows bloggers from all types of platforms to communicate with each other. For example, a user might read a blog entry on another person's site and decide to post his own blog, responding to the original post. If he adds a trackback link, the original blogger will see that the other user has linked to his blog. Depending on the trackback settings, the new blog may also show up as a link at the end of the original blog. Since trackback requires multiple websites to support the trackback protocol, it continues to be used primarily for blogs. However, some news websites now offer trackback links as well. You can tell if a site supports trackback if there is a "Trackback link" or "Trackback URL" at the end of an article. Sometimes the link will be followed by a number, which indicates the number of trackbacks that link back to the blog. Updated April 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trackback Copy

Trackball

Trackball A trackball is an input device used to enter motion data into computers or other electronic devices. It serves the same purpose as a mouse, but is designed with a moveable ball on the top, which can be rolled in any direction. Instead of moving the whole device, you simply roll the moveable ball on top of the trackball unit with your hand to generate motion input. Trackballs designed for computers generally serve as mouse replacements and are primarily used to move the cursor on the screen. Like mice, computer trackball devices also include buttons, which can serve as left-click and right-click buttons, and may also be used to enter other commands. While trackballs are most commonly used with computers, they may also be found in other electronics, such as arcade games, mixing boards, and self-serve kiosks. These devices often have trackballs that are larger than the ones used in computer input devices. Besides the capability to be built into various devices, trackballs have a number of other advantages over mice. Some advantages include the small footprint (since they don't require a mousepad or large area to move the mouse), fingertip control (which may offer more accuracy), and improved ergonomics (since there is less strain on the wrist). Still, many people find trackballs harder to use than mice, since they feel less natural and may require practice to get used to. For this reason, the vast majority of computers include a mouse, rather than a trackball, as the default input device. Updated September 29, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trackball Copy

Tracking

Tracking Font tracking is a typography setting that defines the horizontal distance between each character. Most typefaces (besides cursive ones) have a natural padding on the sides of each character. The tracking setting adjusts this padding to be smaller or larger. A negative tracking setting can be used to fit more text in a specific area without reducing the font size. Negative tracking allows more letters and numbers to be crammed into the same amount of horizontal space. If the tracking setting is too low, however, the text may become difficult to read. If it is reduced enough, the characters may overlap. A positive tracking setting can help make text more readable. It may also be used for logos or other prominent text, since a high tracking setting makes text stand out more. However, if the font tracking is too high, the text can actually become more difficult to read and spaces between words may be difficult to discern. Tracking vs Kerning Kerning is similar to tracking, but refers to spacing between specific letters. Kerning allows certain letters to fit more closely without overlapping. A good example is "AV," in which kerning enables the lower part of the A to fit underneath the top part of the V. Tracking does not consider individual character shapes, but simply adds uniform spacing between each character. NOTE: "Tracking speed" is unrelated to font tracking and describes the rate at which a mouse or trackpad moves the cursor. A high tracking speed will make the cursor move further with less mouse movement or trackpad input. Updated October 18, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tracking Copy

Transistor

Transistor A transistor is a basic electrical component that alters the flow of electrical current. Transistors are the building blocks of integrated circuits, such as computer processors, or CPUs. Modern CPUs contain millions of individual transistors that are microscopic in size. Most transistors include three connection points, or terminals, which can connect to other transistors or electrical components. By modifying the current between the first and second terminals, the current between the second and third terminals is changed. This allows a transistor to act as a switch, which can turn a signal on or off. Since computers operate in binary, and a transistor's "on" or "off" state can represent a 1 or 0, transistors are suitable for performing mathematical calculations. A series of transistors may also be used as a logic gate when performing logical operations. Transistors in computer processors often turn signals on or off. However, transistors can also change the amount of current being sent. For example, an audio amplifier may contain a series of transistors that are used to increase the signal flow. The increased signal generates an amplified sound output. Because of their low cost and high reliability, transistors have mostly replaced vacuum tubes for sound amplification purposes. While early transistors were large enough to hold in your hand, modern transistors are so small they cannot be seen with the naked eye. In fact, CPU transistors, such as those used in Intel's Ivy Bridge processor, are separated by a distance of 22 nanometers. Considering one nanometer is one millionth of a millimeter, that is pretty small. This microscopic size allows chip manufacturers to fit hundreds of millions of transistors into a single processor. Updated October 7, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/transistor Copy

Trash

Trash When you delete a file or folder on a Macintosh computer, it is stored in the Trash. In early versions of the Mac OS, the Trash was located on the desktop, but in Mac OS X, it is found in the Dock. The Trash icon is an empty trash bin when the Trash is empty and changes to a full trash bin when there are items in the Trash. The Trash serves the same purpose as the Windows Recycle Bin. Items can be moved to the Trash by selecting them and dragging them to the Trash icon. You can also choose "Move to Trash" from the Finder's File menu or press Command-Delete after selecting one or more items. You may view items in the Trash by clicking the Trash icon in the Dock. Since the items stored in the Trash have not been permanently deleted, you can drag items out of the Trash if you wish to keep them. Emptying the Trash If you are sure you want to permanently delete the items in the Trash, you can press the "Empty" button in the Trash window or right-click anywhere within the window and select "Empty Trash." You can also empty the trash without even opening the Trash window by selecting "Empty Trash..." from the Finder menu. If you want to overwrite the deleted data so it cannot be recovered even with a data recovery utility, you can select "Secure Empty Trash..." NOTE: If you wish to delete locked items that are stored in the Trash, hold the Option key while selecting "Empty Trash." Updated August 13, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trash Copy

TRIM

TRIM TRIM is a feature supported by modern solid state drives (SSDs) that helps improve drive performance. The word "TRIM" is typically capitalized, though it is not an acronym. Instead, TRIM is a command that the operating system uses to allocate free space on an SSD. When you delete a file on an SSD, the data is often not deleted immediately, but instead is marked for deletion. This prevents certain areas of the flash memory from being written and erased too frequently, which can degrade the SSD's performance over time. However, it also causes many "pages" of free space to be inaccessible, since they are pending deletion. Once enough pages are marked for deletion, the TRIM command performs a "garbage collection" operation. This action deletes large "blocks" of data all at once, allocating free space on the drive. In order for TRIM to work, it must be supported by a device's hardware and software. This includes the 1) SSD, 2) drive controller, and 3) operating system. Nearly all SSDs made in 2012 or later include TRIM support and therefore most SSD controllers manufactured in 2012 or later support TRIM as well. Windows 7 and later and Mac OS X 10.6.8 and later support TRIM at the file system level. However, in some cases it may be necessary to manually configure TRIM, such as when you install a third party SSD. NOTE: In Windows 7, you can check if TRIM is enabled by opening the command prompt and typing the following command: fsutil behavior query DisableDeleteNotify If the result is "0," TRIM is configured correctly. If the result is "1," TRIM is not active. To enable TRIM in Windows 7, you can type the following command: fsutil behavior set DisableDeleteNotify 0 Updated August 7, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trim Copy

Trinitron

Trinitron Trinitron was a pioneering CRT (Cathode Ray Tube) technology developed by Sony Corporation. Unlike conventional CRTs that use a shadow mask, Trinitron utilized an aperture grille. A shadow mask is a metal plate with tiny holes that guide electron beams to specific phosphors on the screen, creating an image. In contrast, an aperture grille consists of vertical wires that allow for a vertically flat screen and produce brighter, more vibrant images with accurate color reproduction. Since Trinitron monitors had vertically flat screens, they had less image distortion and glare than other models. The Trinitron technology quickly became synonymous with high-quality display performance. One of its distinguishing features was the presence of thin horizontal support wires, visible upon close inspection, that stabilized the aperture grille. These wires helped maintain screen integrity, contributing to the display’s superior image quality. Sony's proprietary Trinitron technology allowed the company to lead the television and monitor market for many years. Trinitron displays set a new standard for CRT performance, offering unparalleled clarity and brightness until the widespread adoption of flat-panel display technologies in the 2000s. Now, most televisions and monitors have LCD or OLED screens. Updated July 5, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trinitron Copy

Trojan Horse

Trojan Horse A Trojan Horse is a software program that masquerades as legitimate software but secretly contains a virus or other types of malware. When you run a Trojan horse, its hidden malware is installed and infects your computer. Trojan horses are one of the most common ways for malware to spread, taking advantage of vulnerabilities in human behavior instead of a computer's software. The name comes from the ancient myth of the Trojan Horse. Greek soldiers, unable to penetrate the defenses of the city of Troy during a years-long war, presented the city with a peace offering of a large wooden horse. The Trojans accepted the gift and brought it inside the city walls. Later that night, after the city was asleep, several Greek soldiers emerged from the horse to open the gates and allow the rest of their army inside to conquer the city. Like the ancient wooden horse, a Trojan horse pretends to be something useful — a piece of software you want to run like a game, system utility, or even an antivirus program. However, once you run the program, it turns out to be something different, installing unwanted malware and viruses. Thankfully, there are defenses against Trojan horses. Antivirus software that monitors downloaded files can usually detect Trojan horses, and intercept the malware they contain if you do open them, as long as the virus definitions are kept up-to-date. Even if you have antivirus software installed, the best way to defend against them is to be mindful of what files you open. Be careful, and do not open software or files from unknown or untrusted sources. Updated January 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/trojanhorse Copy

Troll

Troll Trolls are typically thought of as scary creatures that live underneath bridges. While these mythical creatures may only exist in legend, "Internet trolls" are real and cause real problems. In computing, the term "troll" refers to a person who posts offensive, incendiary, or off-topic comments online. These comments may appear in Web forums, on Facebook walls, after news articles or blog entries, or in online chat rooms. Trolls post off-color comments for a number of reasons. They may want to stir up an argument or may simply want attention. Some trolls use comment sections after news articles to rant about their feelings, which may or may not be related to the news story. Whatever the case, trolls are seen as a nuisance in online discussions. At the least, their comments distract from an otherwise legitimate discourse. In worse cases, trolls stir up emotions from other visitors, creating unnecessary arguments. The action of posting obscene or inflammatory comments is often called "trolling." This activity is highly discouraged and may actually be a violation of an online community's user agreement. Fortunately, webmasters and forum moderators typically have the power to remove offensive or off-topic posts and even terminate accounts if needed. While we enjoy the liberty of free speech online, it is still important to respect others. Therefore, remember to show common courtesy when posting comments online and don't be a troll! Updated May 3, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/troll Copy

Troubleshooting

Troubleshooting Troubleshooting is the process of diagnosing the source of a problem. It is used to fix problems with hardware, software, and many other products. The basic theory of troubleshooting is that you start with the most general (and often most obvious) possible problems, and then narrow them down to more specific issues. Many product manuals have a "Troubleshooting" section in the back of the manual. This section contains a list of potential problems, which are often phrased in the form of a question. For example, if your computer's monitor is not producing an image, you may be asked to answer the following troubleshooting questions: Is the monitor plugged in to a power source? Is the monitor turned on? Is the monitor cable plugged into the computer? Is the computer turned on? Is the computer awake from sleep mode? If the answers to all the above questions are Yes, there may be some additional questions such as: Does your computer have a supporting video card? Have you installed the necessary video card drivers? Is the monitor resolution set properly? Typically, each of these questions will be followed by specific advice, whether the answer is Yes or No. Sometimes, this advice is presented as a flowchart diagram. This means each question is followed by a series of other questions, depending on the answer. However, in many cases, only single solutions are provided for each question. Troubleshooting is something we all have to do at some point, though some of us have to troubleshoot product problems more often than others. The good news is that, the more you do it, the more you learn and the better you get at fixing problems. Since many products have similar troubleshooting steps, you may find that after a while, you don't even need the manual to find solutions to the problems you encounter. Updated August 25, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/troubleshooting Copy
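To illustrate the flowchart idea, here is a minimal sketch in Python (not from the original article) that walks through the first few monitor questions above and stops at the first "No" answer:

def troubleshoot_monitor():
    checks = [
        ("Is the monitor plugged in to a power source?", "Plug the monitor into a power source."),
        ("Is the monitor turned on?", "Turn the monitor on."),
        ("Is the monitor cable plugged into the computer?", "Connect the monitor cable."),
        ("Is the computer turned on?", "Turn the computer on."),
    ]
    for question, advice in checks:
        answer = input(question + " (y/n) ")
        if answer.strip().lower() != "y":
            print(advice)            # stop at the first failed check
            return
    print("Basic checks passed; move on to the video card and driver questions.")

troubleshoot_monitor()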

TRS

TRS Stands for "Tip-Ring-Sleeve." TRS is an analog audio plug with three connection points (tip, ring, and sleeve). It comes in two standard sizes: 1/4-inch and 1/8-inch. A 1/8-inch (or 3.5 mm) TRS jack is often called a "headphone jack" since it is the standard audio connector used by headphones. A TRS plug has two lines separating the tip, ring, and sleeve. Tip - Carries the left (L) channel of a stereo audio signal or the positive phase in a balanced mono connection Ring - Carries the right (R) channel of a stereo audio signal or the negative phase in balanced mono Sleeve - Functions as the ground or shield in both stereo and balanced connections TRS vs XLR Both TRS and XLR connections have three connection points and typically carry audio signals. However, TRS can transmit stereo audio (left and right channels) over a single cable, while XLR can only transmit a balanced mono audio signal. XLR connectors have a rugged design and may include a clip to keep the jack in place. Therefore, XLR cables are typically used for live/stage audio and long cable runs. 1/4" TRS cables are typically used for direct connections to instruments like keyboards and guitars. 1/8" TRS audio output ports are most often found on small devices, such as laptops and smartphones. While many laptops and computers still have 1/8" TRS audio output ports, most smartphones now provide digital audio output via USB-C. NOTE: A two-conductor version of the plug is called a TS or "tip-sleeve" plug. TS cables carry a mono audio signal, such as the output from an electric guitar. Updated September 20, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/trs Copy

Truncate

Truncate To truncate something is to shorten it, or cut part of it off. In computer science, the term is often used in reference to data types or variables, such as floating point numbers and strings. For example, a function may truncate the decimal portion of a floating point number to make it an integer. If the number 3.875 is truncated, it becomes 3. Note that this is different from rounding the number to the nearest integer, which would produce 4. Strings may also be truncated, which can be useful if a string exceeds the maximum character limit for a certain application. Several programming languages use the function trunc() to truncate a variable. In PHP, substr() can be used to truncate a string to a set number of characters. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/truncate Copy
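A few short Python examples (not from the original article) show the difference between truncating and rounding, and how a string can be cut to an arbitrary 10-character limit:

import math

print(int(3.875))          # 3  -- converting a float to int truncates the decimal portion
print(math.trunc(3.875))   # 3  -- math.trunc() does the same thing explicitly
print(round(3.875))        # 4  -- rounding, by contrast, goes to the nearest integer

s = "This string is longer than ten characters"
print(s[:10])              # "This strin" -- truncating a string to 10 characters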

TTL

TTL Stands for "Time to Live." TTL can refer to one of several concepts in computer networking. It can refer to the part of the TCP/IP protocol that sets an expiration timer for every data packet sent over a network. It can also refer to how long a server stores cached information before refreshing it. TCP/IP When a data packet is sent from one computer to another over the Internet, it travels by moving from one network to another until it reaches its destination. Each step along the journey is referred to as a "hop." Each data packet traveling along a network adds a small amount of overhead, so packets that could never reach their destination and kept hopping around indefinitely would significantly slow down network performance. To prevent this, every data packet is given a TTL count when it is sent. A data packet's TTL count is an 8-bit number (between 1 and 255, typically set somewhere between 32 and 128) that specifies the maximum number of hops it can take before expiring and being discarded. Every time that a data packet hops from one router to the next, the TTL count is decreased by 1. If the TTL count reaches 0 before the packet reaches its destination, the packet is discarded by the router. Cache TTL can also refer to the amount of time that a server keeps a cache of data for retrieval before refreshing it. For example, a DNS server will have a TTL value for each record, measured in seconds, that controls how long it can serve a record before refreshing it. These TTL entries can be set as low as 30 seconds, but lower-priority records can be set as long as 86,400 seconds (24 hours). A CDN server will have a TTL value for every asset it stores that controls how long it keeps a cached copy before it checks the origin server for changes. This allows the CDN to check frequently-updated files often, while other assets can be cached for days or weeks between updates. Updated September 27, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ttl Copy
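As a rough illustration of the hop-count behavior described above, the following Python sketch (router names are hypothetical, not real network code) decrements a packet's TTL at each hop and discards the packet when the count reaches zero:

def forward(ttl, route):
    # Each router decrements the TTL; a packet whose TTL hits 0 is discarded.
    for router in route:
        ttl -= 1
        if ttl == 0:
            print(router + ": TTL expired, packet discarded")
            return False
    print("Packet delivered with TTL", ttl, "remaining")
    return True

forward(3, ["router-a", "router-b", "router-c", "router-d"])   # discarded at router-c
forward(64, ["router-a", "router-b", "router-c", "router-d"])  # delivered, TTL 60 remaining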

Tumblr

Tumblr Tumblr is a microblogging website that allows you to create and follow blogs. Unlike traditional blogging websites, Tumblr encourages short posts, such as a single photo or a few sentences of text. The goal is to make blogging quick and easy, enabling users to post frequent updates. Tumblr is also more community-oriented than other blogging websites, offering a number of social features. For example, you can follow other users, "like" specific posts, and "reblog" updates from other users. The optional "Reply" and "Ask" features allow you to let other users comment on your blogs or ask questions. There is also a Fan Mail feature that allows you to communicate privately with other bloggers. If you want other Tumblr users to interact directly with one of your blogs, you can create a group blog. When you add other users as members, they can post their own updates and reply to posts made by other members. You can also promote members to admins, giving them permissions similar to a web forum moderator. Admins can invite and remove users, delete posts, and view private messages. Like other social media websites, in order to use Tumblr, you first need to create an account. After registering with your email address and choosing a username, Tumblr guides you through creating your first blog and following three others. Once you've completed the initial steps, you can use the Tumblr dashboard to browse or search blogs and post your own updates. Updated March 29, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tumblr Copy

Tunneling

Tunneling Tunneling is the process of transporting network traffic that uses one protocol within data packets using another protocol. It allows two devices to hide the nature of the data within the tunnel from other devices on the network. Tunneling protocols can encrypt traffic, provide a secure connection for a VPN, or allow packets using an unsupported protocol to travel across a network. All data sent over a network travels in data packets. Each packet consists of a small header comprising protocol and address information, followed by a large data payload containing the transferred data itself. Tunneling takes an entire data packet and turns it into the payload of another data packet that uses a different protocol in a process called "encapsulation." The other devices on the network that carry a packet to its destination can only see the header of the outer packet and are unaware of the contents of the encapsulated one. Once it reaches its destination, the receiving device decapsulates the packet and accesses its data payload. The most common use for tunneling is establishing secure VPN connections over the Internet. A VPN uses one of several tunneling protocols, like IPsec, L2TP, or PPTP, to encrypt all traffic between a VPN server and a client. However, there are other reasons that a data transfer may require tunneling. In some cases, the network may not support the original protocol. For example, if an IPv6 packet needs to travel over a network that only supports IPv4, it can be encapsulated within a compatible IPv4 packet and decapsulated once it reaches the other side. Tunneling can also encrypt traffic using a protocol that does not support encryption by encapsulating it within a protocol that does; for example, unencrypted FTP traffic can be carried through an encrypted SSH tunnel. Finally, tunneling can help certain types of traffic bypass a network's firewall by encapsulating restricted traffic (like P2P file sharing) within accepted protocols (like HTTP). Updated December 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/tunneling Copy
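The following Python sketch illustrates the general idea of encapsulation. The dictionaries merely stand in for packets; the protocol names, fields, and addresses are illustrative, not real packet formats:

# The inner packet (header + payload) becomes the payload of an outer packet
# that uses a different protocol.
inner_packet = {"protocol": "IPv6", "destination": "2001:db8::1", "payload": b"application data"}

outer_packet = {
    "protocol": "IPv4",
    "destination": "203.0.113.10",     # the tunnel endpoint
    "payload": inner_packet,           # the entire inner packet rides inside
}

# Routers along the way only see the outer IPv4 header.
# At the tunnel endpoint, the packet is decapsulated:
decapsulated = outer_packet["payload"]
print(decapsulated["protocol"])        # IPv6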

Tuple

Tuple A tuple (pronounced "tuh-pull") is a data structure that stores a specific number of elements. These elements may include integers, characters, strings, or other data types. Mathematics The term "tuple" originates from math, rather than computer science. A math tuple may be defined as an "n-tuple," where "n" is the number of values the tuple contains. In mathematical expressions, tuples are represented by comma-delimited lists within parentheses. The following example is a "4-tuple," which has four values. (3, 7, 13, 17) Tuples serve a variety of purposes in mathematics. They are commonly used to represent a set of related values, such as prime numbers or mathematical sequences. A tuple can also display a range of input or output values for a function. Computer Science In computer programming, tuples provide an efficient way to store multiple values. Since they are static and cannot be modified, tuples generally require less memory than arrays. They are also flexible since they can store multiple data types. The following example is a "5-tuple," or a tuple that contains five values. (11, 2020.07, "techterms", "C", -1000) The 5-tuple above contains a positive integer (11), floating point number (2020.07), string (techterms), character (C), and a negative integer (-1000). Tuples are an effective way to store data in computer programs, but not all programming languages support them. For example, Python and C# support tuples, but PHP and Ruby do not. If a language does not support tuples, an immutable array — one that cannot be modified — is the closest alternative. Updated July 11, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tuple Copy
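In Python, for example, a tuple is written with parentheses and cannot be modified after it is created. The short sketch below reuses the 5-tuple from the definition above:

# A 5-tuple holding mixed data types
record = (11, 2020.07, "techterms", "C", -1000)

print(record[2])        # elements are read by index -> "techterms"
a, b, c, d, e = record  # the values can also be unpacked into separate variables

# Tuples are immutable; uncommenting the next line raises a TypeError
# record[0] = 12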

Tutorial

Tutorial A computer tutorial is an interactive software program created as a learning tool. Tutorials help people learn new skills by using a step-by-step process that ensures the user is following along and comprehending the material. For example, a Web development tutorial may begin with instructions on how to create a basic Web page. This page might only include the words "Welcome to my website" on it and use the minimum HTML required in order for the page to load in a Web browser. Once the user is able to create a working Web page, the tutorial may explain how to add other features, such as styled text, table layouts, and images, to the page. Then the tutorial may provide instructions on how to publish the Web page to the Internet. Some software tutorials provide testing features to ensure comprehension of the material, while others may be simple walkthroughs of a software program. Tutorials can be used for both school and business purposes and are written for basic, intermediate, and advanced users. Even smart computer programmers use tutorials. Most software development programs include a tutorial for creating a "Hello World!" program, which is the most basic program that can be created with the software. Since tutorials offer a gradual approach to learning, they can be helpful to people at many different skill levels. If a computer programmer can benefit from a tutorial, just about anybody can. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tutorial Copy
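For reference, the "Hello World!" program mentioned above usually amounts to a single statement; in Python, for example:

print("Hello World!")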

TWAIN

TWAIN TWAIN is a software API and communication protocol that helps applications control scanners, digital cameras, and other image-capturing devices. Any application that includes TWAIN support can interact with any scanner with a TWAIN-compatible device driver. It has been an industry standard since the early 1990s and is supported by most operating systems, image-editing applications, and nearly every scanner and image-capture device on the market. The TWAIN API works through a software library called a TWAIN Data Source Manager (DSM), which is included with most TWAIN device drivers. First, any application with TWAIN support can talk to a DSM to tell it which scanner it wants to use and how to set up the scan. The API includes several parameters you can customize, including image resolution, cropping, color depth, and page orientation. Once you've set everything up and initiated the scan, the app sends those parameters to the DSM, and the DSM passes them on to the scanner. The scanner captures the image and sends it back to the DSM, which processes it and forwards it back to the application that first requested the scan. For example, Photoshop uses TWAIN to import images from scanners and other devices. A new extension of TWAIN, called TWAIN Direct, was introduced in 2019. TWAIN Direct eliminates the need for device-specific drivers by establishing direct communication between an application and a device through a network connection, like Wi-Fi or Ethernet. Since it works over a network connection, TWAIN Direct adds support for mobile operating systems like Android and iOS. NOTE: While TWAIN looks like an acronym, it does not actually stand for anything and is capitalized as a stylistic choice. Many people have assigned a backronym, "Technology Without An Interesting Name." Updated August 31, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/twain Copy

Tweak

Tweak When you modify a certain piece of hardware for better performance, it is often referred to as "tweaking" it. Overclocking the computer's CPU or changing jumper settings on the motherboard are common examples of hardware tweaking. Removing system limitations and adding plug-ins or extensions to a computer's operating system are types of software tweaking. Tweaking a computer is much like "tuning" a car (you know, the ones with the huge mufflers, big spoilers, and pimped out rims). It may increase performance, but is best left in the hands of the technically savvy. For example, overclocking your computer's processor may cause it to crash frequently, or worse yet, overheat and destroy the CPU. So, for most people, it is best to leave well enough alone. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tweak Copy

Tweet

Tweet For most of history, "tweet" has been the sound a bird makes. However, with the advent of Twitter, the word "tweet" has taken on a whole new meaning. A tweet is an online posting, or "micro-blog" created by a Twitter user. The purpose of each tweet is to answer the question, "What are you doing?" However, tweets can contain any information you want to post, such as your plans for the weekend, your thoughts about a TV show, or even notes from a lecture. You can publish a tweet using a computer or a mobile phone. Once published, the tweet will appear on the Twitter home pages of all the users that are following you. Likewise, your Twitter home page will display the most recent tweets of the users that you are following. Each tweet is limited to 140 characters or less. This limit makes it possible to show several tweets on one page without certain tweets taking up a lot more space than others. However, it also means that tweets must be brief, so you must choose your words wisely. Of course, there is no limit to how many tweets you can post, so if you really have a lot to say, you can publish several tweets in a row. After all, what better way to spend your time than to let the world know that you are at Starbucks, drinking a Frappuccino and reading the latest issue of TIME magazine. That is important information to share with the world. Updated March 19, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/tweet Copy

Twitch

Twitch Twitch is a live streaming service that enables individuals to stream video content online. It began as a platform for gamers to stream their matches in real-time, but has expanded into other categories. Examples include Just Chatting, Music, and Creative channels. Anyone with a computer and a high-speed Internet connection can stream on Twitch. It is free to stream, and broadcasters can earn money through user subscriptions, advertisements, and donations. Similar to YouTube, popular Twitch streamers earn enough to make Twitch streaming a full-time job. Examples of online games and esports events streamed on Twitch include: Fortnite Minecraft Roblox Counter-Strike StarCraft History Twitch began in 2007 as Justin.tv, a small streaming service focused on general "real life" content rather than gaming. However, the most popular type of content was online gaming, which led to the introduction of TwitchTV, a game streaming service, in 2011. The platform gained several million users in the next few years and rebranded as simply "Twitch." In 2014, Amazon acquired Twitch for $970 million. While Amazon now owns the service, it is still run as Twitch Interactive, Inc. and has not changed much since the acquisition. The most notable change is that the streaming platform now offers a broader variety of content, such as the "IRL" (In Real Life) streaming category. Twitch has come full circle in terms of content since Just Chatting is now the most popular streaming category. Updated June 5, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/twitch Copy

Twitter

Twitter Twitter is an online service that allows you to share updates with other users by answering one simple question: "What are you doing?" In order to use Twitter, you must first sign up for a free account. Once you have created your account, you can post your own updates and view the updates others have posted. You can search for people to follow or you can let Twitter select random users. Once you have selected a number of users, their most recent posts, or "tweets," will show up on your Twitter home page. Likewise, your own latest tweets will show up on the home pages of people who have decided to follow you. Twitter limits each tweet to 140 characters, which means there is no room for rambling. Of course, in this era of limited attention spans, 140 characters may be as much as other users want to read anyway. The character limit is also within the 160 character limit of SMS text messages. This is useful, since tweets can be sent to Twitter using mobile phones. To Twitter via your cell phone, you simply need to add your phone number in the "Devices" area of the Twitter Settings page. Since most people have frequent access to a computer or cell phone, Twitter makes it possible to provide others with frequent updates about your life. Many people also use Twitter to blog about the news, politics, TV shows, or any other hot topic. Some people even use it to share their thoughts on lectures or sermons. So Twitter posts are certainly not limited to answering the question, "What are you doing?" Twitter has become the next hot trend in social networking. While it is not as functional as Facebook or MySpace, Twitter's appeal lies within its simplicity. It allows friends, family, and complete strangers to stay connected through quick updates that only take a couple of seconds to write. Therefore, if you like to feel connected to others, but have limited time, Twitter might just be for you. While "Twitter" is a noun, it can also be used as a verb. For example, "He twitters at least five times a day." To learn more about Twitter or to sign up for an account, visit Twitter.com. Updated March 19, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/twitter Copy

Two-Factor Authentication

Two-Factor Authentication Two-factor authentication (2FA) is a security measure that requires two forms of authentication to access an account. It often works in combination with a username and password to add an extra level of security. While a standard login with a username and password provides reasonable security, it can be easily compromised. For example, if someone knows your email address and guesses your password, he or she may be able to access your account. Two-factor authentication adds another layer of security that protects your account even if someone finds out your login information. Below are several types of two-factor authentication: A four to six-digit code sent via text message to the user's mobile phone A one-time code sent via email to the user's email address An additional PIN or passcode required in addition to a username and password A secret question and answer created by the user A physical token, such as a small "key" that displays a dynamic code A hardware dongle linked to the user's account A biometric identifier such as a fingerprint or facial recognition. In many cases, two-factor authentication is only required once per device. After you have successfully logged into a website on a specific device, the site may set a cookie in your browser. Once this cookie is set, your device becomes the secondary authentication for future logins. In order to bypass two-factor authentication each time you log in to a website, you may need to check the "Remember this device" checkbox when logging in. Some websites and online services offer two-factor authentication as an optional security feature, while others require it. 2FA options are typically located in the "Password & Security" settings within a user account. Updated June 5, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/two-factor_authentication Copy
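One common second factor not detailed above is the time-based one-time password (TOTP) generated by authenticator apps. The Python sketch below (not from the original article) shows roughly how such a six-digit code is derived from a shared secret and the current time, following the general TOTP approach; the Base32 secret is a placeholder value:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)   # shared secret, Base32-encoded
    counter = int(time.time()) // interval              # number of 30-second steps since the Unix epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation of the HMAC output
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; prints a code that changes every 30 seconds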

Typecasting

Typecasting Typecasting, or type conversion, is a method of changing an entity from one data type to another. It is used in computer programming to ensure variables are correctly processed by a function. An example of typecasting is converting an integer to a string. This might be done in order to compare two numbers, when one number is saved as a string and the other is an integer. For example, a mail program might compare the first part of a street address with an integer. If the integer "123" is compared with the string "123" the result might be false. If the integer is first converted to a string, then compared with the number in the street address, it will return true. Another common typecast is converting a floating point number to an integer. This might be used to perform calculations more efficiently when the decimal precision is unnecessary. However, it is important to note that when typecasting a floating point number to an integer, many programming languages simply truncate the decimal value. This is demonstrated in the C++ function below. int float_to_int(float a)   // example: a = 2.75 {     int b = (int)a;   // typecast float to int     return b;         // returns 2 } In order to round to the nearest value, adding 0.5 to the floating point number and then typecasting it to an integer will produce an accurate result. For example, in the function below, both 2.75 and 3.25 will get rounded to 3. int round_float_to_int(float a)   // example: a = 2.75 {     int b = (int)(a+0.5);   // typecast float to int after adding 0.5     return b;               // returns 3 } While most high-level programming languages support typecasting, each language uses its own method to convert data. Therefore, it is important to understand how the language converts between data types when typecasting variables. Updated January 7, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/typecasting Copy

Typeface

Typeface A typeface is a set of glyphs — letters, numbers, punctuation, and other characters — of the same design. A single typeface may include many variations in size, weight, width, and slope (or italics). Each of these variations of a typeface is known as a font. Most operating systems have several dozen typefaces pre-installed, giving users plenty of design choices. Word processors and graphic design programs allow the user to use any installed font for text. Changing the typeface alters the look and feel of a block of text, giving it a classic, contemporary, or playful appearance. Some typefaces work better for longer pieces of text, while others are more suited to shorter blocks like titles, headlines, and captions. There are countless typefaces available that a user can install on their computer, which can be managed from a Settings category or Font Book application. Most typefaces are a set of several files, one for each of its fonts. For example, a version of Times New Roman may have one font file each for regular text, italics, bold, and bold italics; a professional version of Helvetica for graphic designers may also have a series of font files for different weights of the typeface, from ultra-thin to extra-bold. Styles of Typefaces Most typefaces are classified into one of several styles based on some important design elements. The five common styles are serif, sans serif, script, monospaced, and display, with serif and sans serif being the most often used. Serif typefaces include serifs on the letter forms. A serif is a small line or detail at the end of a larger stroke, like the feet at the bottom of the letter M. They are often used for body text since the serifs help guide the eye from one letter to the next and are often more legible in small sizes. Times New Roman, Palatino, and Baskerville are three common serif typefaces. Sans-serif typefaces lack serifs. The letter forms are a little more geometric, with less flair than those with serifs. While generally reserved for shorter blocks of text like titles, headlines, and captions, some sans serif typefaces are designed for legibility in body text nearly as well as serif typefaces. Helvetica, Arial, and Verdana are some common sans-serif typefaces. The other classes of typefaces are still frequently used, but not for body text. Script typefaces mimic handwriting or calligraphy. Monospaced typefaces space every glyph equally; since letters will line up neatly in columns, monospaced typefaces are used by developers when writing source code. Display typefaces are ideal for headlines and titles, often with design elements that make them difficult to read at small sizes. Updated November 15, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/typeface Copy

U

U A rack unit, abbreviated "U," is a unit of measurement of the size of rack-mounted equipment. It measures how tall a rack-mountable piece of hardware is — 1U is equal to 1¾ inches, or 44.45 mm. A full-size rack cage is often around 42U tall, or just over six feet; half-height cages 18-22U tall are also common. Most rack-mountable equipment is between 1 and 4U tall and 19 inches wide, although some telecommunications equipment uses special 23-inch-wide racks instead. Servers, disk storage arrays, and networking equipment like switches and firewalls are some of the most common rack-mounted devices, although all sorts of hardware can be found in rack-mountable versions. For example, four rack-mounted servers may all connect to a rack-mounted KVM switch to allow them to share a single keyboard, mouse, and monitor; they may also all plug into a single rack-mounted power strip and be kept cool by a rack-mounted cooling fan. An HP ProLiant rack-mounted server, for instance, may be just 1U tall. Using rack-mounted hardware saves space by stacking multiple pieces of equipment vertically and helps keep that equipment neat and organized. It simplifies cable management by keeping devices close to each other and locked in place. Since it simplifies organization, rack-mounted hardware is ubiquitous in data centers and other professional IT installations. Updated September 5, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/u Copy

UAT

UAT Stands for "User Acceptance Testing." UAT is a process designed to help ensure products will meet user expectations when they are released. It involves running a product through a series of specific tests that help indicate whether or not the product will meet the needs of its users. While the user acceptance testing process can be applied to any type of product, in the computer industry, it is most often associated with software programs. Software applications typically go through multiple development stages before being released to the general public. These stages include the initial development process, the alpha testing stage, the beta testing stage, the release candidate stage, and the public release of the software. Not every software program goes through each one of these stages, but most software programs are tested by multiple users before they are released. The primary UAT stages of software development include the alpha and beta testing stages. In the alpha testing stage, a limited number of users within (and possibly outside) a company test an early version of software for bugs and user interface issues. In the beta testing stage, a larger group of users (usually outside the company) test the software and provide feedback, including bug reports and usability issues. This feedback lets the software company know if the product is meeting user expectations and enables the development team to make the necessary revisions before the official release. Because software is relatively easy to modify and update (unlike hardware products), user acceptance testing may continue even after a software program is released. Many programs now include feedback and bug submission forms directly within the software, which allow users to provide feedback as they use the software. Updated July 12, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uat Copy

UDDI

UDDI Stands for "Universal Description Discovery and Integration." UDDI is a protocol that allows businesses to promote, use, and share services over the Internet. It is an OASIS Standard, which is supported by several major technology companies. Members include Microsoft, Cisco, Oracle, Avaya, Sun Microsystems, and others. The UDDI protocol serves as a foundational tool that allows businesses to find each other and complete transactions quickly and easily. Companies that use the UDDI protocol can extend their market reach and find new customers while also finding other businesses that offer useful services to them. Because UDDI uses a standard format for describing business services, it is easy to search and find useful services offered from other businesses. Updated March 28, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uddi Copy

UDP

UDP Stands for "User Datagram Protocol." UDP is a data transport protocol within the Internet Protocol (IP) suite. UDP is a connectionless protocol that does not rely on two hosts establishing a connection before transferring datagrams. Streaming media, online gaming, and other situations where low latency is more important than occasionally dropped data typically use UDP. Compared to other data transfer protocols like TCP, SSH, and HTTPS, UDP is a simple way to send data. The device sending the datagrams does not need to establish a connection, ask whether each packet has been received, or re-send any data lost in transit. Instead, it sends a series of datagrams containing the destination IP address, routing information, and payload as fast as possible. Skipping the connection and verification steps means that UDP has less overhead and latency than other protocols. However, there is no guarantee that the device receiving the datagrams is getting all of them. If network congestion causes some datagrams to get dropped, there is no way for the receiving device to ask for the data again. In the case of a video call over the Internet, a lower latency means that there is less of a delay between one end of the call saying something and the other end hearing it. If the connection between the two ends slows down and some datagrams are lost, the video may cut out for a moment before resuming. Updated October 5, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/udp Copy
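A minimal Python sketch shows how little setup UDP requires; the loopback address and port number are hypothetical, and there is no handshake or delivery confirmation:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM selects UDP
sock.sendto(b"hello", ("127.0.0.1", 9999))                # send one datagram; no guarantee it arrives
sock.close()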

UEFI

UEFI Stands for "Unified Extensible Firmware Interface." UEFI is an interface that loads near the beginning of the boot sequence and provides custom configuration options for a PC. It is considered the successor to BIOS and offers many advantages, including mouse support, a graphical user interface, and support for 32-bit and 64-bit systems. UEFI is a standardized version of the original Extensible Firmware Interface (EFI) specification developed by Intel. It is managed by the Unified Extensible Firmware Interface Forum, comprised of more than 250 member companies. The standardized UEFI specifications are designed to ensure that components from different hardware manufacturers can be recognized and controlled via the UEFI. Accessing the UEFI The UEFI is designed for advanced users, so by default it is not displayed during the startup process. The standard way to load the UEFI on most Windows PCs is to press the F2 key when the computer is starting up. Typically, you should press the key when the manufacturer's logo (such as "Dell" or "HP") appears. In Windows 8 and 10, you can also use the Troubleshoot option to enable "UEFI Firmware Settings" before the next restart. Configuration options provided in the UEFI vary depending on the manufacturer and computer model. Typically, you can perform administrator tasks like formatting and partitioning storage devices. You may also be able to change your boot disk and activate or deactivate specific peripherals. Some systems even allow you to overclock your CPU or run your PC in a low-power "eco-friendly" mode. Warning: Since the UEFI gives you administrative control over low-level settings, it is possible to make changes that may harm your computer. Therefore, use discretion when making any changes within the UEFI. If the computer does not function correctly after starting up (e.g., the fans run too fast), you can restart and use the UEFI to revert your changes. NOTE: Intel-based Macs use EFI to manage the hardware configuration, but the UEFI is not available at startup. Updated December 17, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uefi Copy

UGC

UGC Stands for "User Generated Content." In the early years of the Web, most websites were static, meaning each page had fixed content that did not change unless it was updated by the webmaster. As the Web evolved, dynamic websites, which generate content from a database, became the norm. Now, in the Web 2.0 era, many websites include UGC, or content created by visitors. Many different types of websites contain user generated content. One example is a Web forum, which allows users to discuss topics by posting comments online. Another example is a wiki, which allows users to directly add and edit website content. Wikipedia, for instance, contains information written and submitted by thousands of authors around the world. Social networking websites like Facebook and LinkedIn are also UGC websites that allow users to create personal profiles and share information with each other. These websites simply create a platform for users to add and share content with each other. While wikis, Web forums, and social networking websites contain nearly all user generated content, many other sites now contain both original content and UGC. For example, blogs often include a section where visitors can post comments about the author's articles. News websites typically allow visitors to post their feedback below the news stories. Often, these hybrid pages eventually contain more user generated content than original content. Thanks to UGC, the Web is now a more interactive medium, allowing users to actively participate in the creation of website content. Updated January 19, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ugc Copy

UICC

UICC Stands for "Universal Integrated Circuit Card." A UICC is a "smart card" designed to operate with 3G and 4G wireless technologies, including LTE. It can be used for multiple applications, but is commonly used as a SIM card in mobile phones. UICCs have mostly replaced ICCs (Integrated Circuit Cards), which were used with 2G and early 3G systems. A UICC is a tiny card, smaller than a thumbnail, that includes an integrated circuit. This circuit contains a processor, non-volatile memory (NVRAM), ROM, and RAM. Each card has a unique identifier that is used to identify a device on a cellular network. The card may also store data, such as contacts that have been saved by the user. While there is no standard storage capacity for UICCs, they commonly have at least 256 kilobytes of storage and may exceed one gigabyte. UICCs are "universal" since a single card supports multiple applications and therefore multiple cellular networks. Examples include USIM (Universal Subscriber Identity Module) for GSM networks, CSIM (CDMA Subscriber Identity Module) for CDMA networks, and ISIM (IP Multimedia Services Identity Module) for UMTS networks. The universal nature of UICCs allows them to work with various mobile networks around the world. NOTE: In order to use a UICC, the card's unique identifier (the ICC ID) must be registered and activated with a mobile service provider. Removing the card from a smartphone or other cellular device will make the device unrecognizable on the network. In most cases, if you insert the UICC card in another device, it will automatically be recognized and usable on the network. Updated March 13, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uicc Copy

Ultra DMA

Ultra DMA Stands for "Ultra Direct Memory Access." Ultra DMA, also abbreviated UDMA, is an interface that transfers data between a computer and a storage device (usually a hard drive). It is the fastest version of DMA, replacing Word DMA (or WDMA) and supporting a burst transfer speed of up to 133 MB/s. Like all other versions of DMA, it allows the transfer of data directly from the system memory, bypassing the CPU and removing the associated overhead. Computers use the various DMA modes to transfer data from memory to a hard drive over the Parallel ATA interface. The faster speeds offered by most UDMA modes require a hard drive to be connected by an 80-conductor ATA cable instead of a 40-conductor one. The extra lines in this cable are all ground lines, separating the data lines to prevent interference. UDMA adds two new features over WDMA to allow faster data transfer rates. First, double transition clocking transfers data twice per clock cycle instead of just once. Second, cyclic redundancy checking (CRC) uses an algorithm to calculate redundant information in a data transfer and includes a CRC code with the data. The receiving device checks the CRC code against the data received to see if any data was corrupted during the transfer, and requests the data be sent again if necessary. If CRC errors occur frequently, the system may switch to a slower UDMA mode to increase reliability. Updated November 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ultradma Copy
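The general idea behind CRC checking can be sketched in a few lines of Python. This example uses the CRC-32 function from the standard library purely for illustration; it is not the specific CRC algorithm defined by the ATA standard:

import zlib

data = b"sector payload"
crc = zlib.crc32(data)              # sender computes a CRC code and sends it along with the data

received = b"sector payload"
if zlib.crc32(received) == crc:     # receiver recomputes the CRC and compares the two codes
    print("Data intact")
else:
    print("CRC mismatch - request the data again")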

Ultrabook

Ultrabook Ultrabook™ is an Intel trademark that describes a laptop specification. Notably, ultrabooks have an Intel CPU, SSD storage, and a thin unibody frame. Ultrabooks are so thin that they do not have an optical drive, such as a DVD player. The initial Ultrabook spec was announced in 2011, which included an Intel "Sandy Bridge" processor, a maximum height of 21mm, 5-hour battery life, a maximum 7-second resume-from-hibernation time, and Intel's Anti-Theft Technology. The 2012 update required a minimum "Ivy Bridge" Intel processor, USB 3.0 or Thunderbolt ports, and a minimum data transfer rate of 80 MBps. The 2013 Ultrabook update increased the minimum processor to a "Haswell" Intel processor and required 6 hours of high-definition video playback. As with many trademarked terms, "Ultrabook" is often used generically. For example, many people refer to thin and lightweight laptops as ultrabooks. However, an Ultrabook only refers to laptops that meet Intel's specifications. Since laptops have gotten thinner and most no longer have optical drives, Ultrabooks are not as exclusive as they once were. However, if you are shopping for a new laptop, it can still be helpful to check if the laptop meets the requirements of an Ultrabook. Updated August 17, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/ultrabook Copy

UML

UML Stands for "Unified Modeling Language." UML is an industry-standard modeling language designed to help visualize how a complex software system will work. System architects, software engineers and developers, and other stakeholders use UML to build diagrams using a standard visual language to lay out and document how different parts of a software system will interact. A set of UML diagrams is similar to a set of blueprints for a house. It shows how to build the house using common symbols that the architect, builder, electrician, plumber, and homeowner can all understand enough to fulfill their role. The UML specification defines 13 types of diagrams in two categories—Structure diagrams, which show the static structure of a system, and Behavior diagrams which show the dynamic interactions between objects in a system. Each of these diagrams shares a set of design elements, with symbols representing certain types of objects, their roles, and how they interact with each other. Objects are linked together, similar to a flow chart, to show relationships and the flow of information. While UML was designed primarily for object-oriented software design, it is also useful when modeling non-software systems. Certain types of UML diagrams can model manufacturing processes, hardware design, or other processes where objects interact. The Object Management Group (OMG) maintains the UML specification, updating it over time as new technology and procedures become common. Updated October 11, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/uml Copy

UNC

UNC Stands for "Universal Naming Convention" (or "Uniform Naming Convention"). UNC is a standard format for naming and accessing shared resources on a local area network (LAN). Using it allows an operating system that supports UNC (like Windows) to access files and folders over a network just as easily as on the local computer. UNC can identify many types of devices on the network — computers, printers, scanners, NAS devices, or any other device that can store and share files. A UNC path consists of three elements: the server name, the shared directory, and the filename, separated by backslashes. A UNC can be entered anywhere in an operating system that a local file path would be to access a shared network resource. A typical UNC path appears as follows: \\server-name\shared-directory\filename The server name can be a string that identifies a server set by the server's administrator using DNS or WINS, or it could be the server's IP address. The shared directory is a directory shared on the other device. Each share on the server is separate from the others. The filename is not necessarily only a file name. It can also contain subfolders of the shared directory to display the full file path. Updated November 2, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/unc Copy
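Because a UNC path works like any other file path, a program running on Windows can open a shared file directly. A minimal Python sketch, using hypothetical server, share, and file names:

path = r"\\server-name\shared-directory\report.txt"   # hypothetical UNC path
with open(path, encoding="utf-8") as f:               # opened just like a local file
    print(f.read())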

Undo

Undo Undo is a command included in most software programs. It is typically located at the top of the Edit menu and has the standard shortcut "Control+Z" in Windows and "Command-Z" in Mac OS X. The Undo command allows you to undo the last action you performed in the program. Many programs also include a "multiple undo" feature, which allows several actions to be undone in series. You can typically reverse an "Undo" command by selecting "Redo" from the Edit menu. The Undo command can undo a wide variety of actions, depending on the program. For example, in a word processor, Undo is often used to delete a recently typed section of text. However, you can also undo a deletion, the pasting of an object, or a change made to the page formatting settings. In an image editing program, you can undo a paintbrush stroke, a selection, the moving of an object, and many other actions. Modern operating systems allow you to undo moving and renaming files. Undo is a convenient feature, which can save a lot of time and frustration for those of us who are not perfect and occasionally make mistakes. Perhaps that's why the shortcut was designed to be so easy to use (Z is right next to the Control and Command keys). So, whenever you make a mistake while using a program, just enter the Undo shortcut and the mistake is gone. If only the Undo command could be used in other areas of life as well. Updated October 26, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/undo Copy
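Internally, a "multiple undo" feature is often built on a stack of completed actions, with the most recent action reversed first. The following Python sketch (an assumption about a typical implementation, not from the original article) shows the idea; real programs also record how to reverse each action and support Redo:

undo_stack = []

def do_action(action):
    undo_stack.append(action)               # every completed action is pushed onto the stack
    print("Did:", action)

def undo():
    if undo_stack:
        print("Undid:", undo_stack.pop())   # the most recent action comes off first

do_action("type 'hello'")
do_action("paste image")
undo()   # Undid: paste image
undo()   # Undid: type 'hello'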

Unfriend

Unfriend When you "friend" a user on a social networking website, you add that person to your list of online friends. When you remove a person from your friend list, you "unfriend" that user. Unfriending may be done for many reasons. For example, if an online friend is contacting you too often or writing on your wall too much, you may want to remove that user from your list of friends. You may also wish to unfriend a user if a relationship with that person has ended. When you unfriend someone, that person no longer has "friend" access to your profile page. This means he or she may no longer be able to view your profile or post comments on your wall. You can customize these options for friends and non-friends in Facebook using the "Privacy Settings" link. Some situations may require a "mass unfriending," which involves removing many people from your friend list at one time. Many MySpace and Facebook users have hundreds or even thousands of friends, though most of these people are not close friends in real life. Since users with "friend" status can view your personal profile data, it might make sense to unfriend users that you don't know. Updated November 21, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/unfriend Copy

Unicast

Unicast A unicast is a real-time data transmission from a single sender to one recipient. Examples include a video stream, audio stream, or online presentation shared with a single user over the Internet. It is considered "one-to-one" communication since there are only two parties involved. Unicasting is often contrasted with multicasting and broadcasting — both of which are "one-to-many" forms of communication. Multicasts send data to a specific set of users, while broadcasts do not have a defined number of recipients. Broadcasting is the most common, while unicasting is the least common. Unicasting Misconceptions A unicast is a type of cast — the real-time distribution of content. However, the term "unicasting" is sometimes used to describe point-to-point data transfers, such as a file download, and bi-directional communication, such as a phone call. These are both forms of one-to-one communication, but they are not types of casting. Updated September 18, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/unicast Copy

Unicode

Unicode Unicode is a universal character encoding standard. It defines the way individual characters are represented in text files, web pages, and other types of documents. Unlike ASCII, which was designed to represent only basic English characters, Unicode was designed to support characters from all languages around the world. The standard ASCII character set only supports 128 characters, while Unicode can support roughly 1,000,000 characters. While ASCII only uses one byte to represent each character, Unicode supports up to 4 bytes for each character. There are several different types of Unicode encodings, though UTF-8 and UTF-16 are the most common. UTF-8 has become the standard character encoding used on the Web and is also the default encoding used by many software programs. While UTF-8 supports up to four bytes per character, it would be inefficient to use four bytes to represent frequently used characters. Therefore, UTF-8 uses only one byte to represent common English characters. European (Latin), Hebrew, and Arabic characters are represented with two bytes, while three bytes are used for Chinese, Japanese, Korean, and other Asian characters. Additional Unicode characters can be represented with four bytes. Updated April 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/unicode Copy
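A short Python example demonstrates the variable-length UTF-8 encoding described above; each sample character is encoded and its byte count printed:

for ch in ["A", "é", "中"]:
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), "byte(s):", encoded.hex())
# "A" -> 1 byte, "é" -> 2 bytes, "中" -> 3 bytes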

Unified Memory Architecture

Unified Memory Architecture Unified Memory Architecture (abbreviated UMA) is a type of computer memory architecture that uses the same pool of memory for both the CPU and GPU. It is commonly used in computers with integrated graphics processors, as well as mobile devices like smartphones and tablets. Computers with discrete graphics processing units have separate memory banks for the CPU and GPU. The CPU uses the main system RAM to store temporary data for applications and processes running on the computer, while GPUs use their own built-in memory banks of high-speed VRAM for image data. The PCI Express bus transfers data between the system RAM and the VRAM when necessary. CPUs with an integrated GPU will share a single pool of memory between both processors, reducing the cost and complexity of the computer. This architecture eliminates the need to swap data between RAM and VRAM. In some cases, this results in slower performance as the CPU and GPU need to access memory over a bus that would otherwise only service the CPU. Apple introduced their implementation of a UMA with their M1 series of processors. Since these processors include the system's memory on the same system-on-a-chip as the CPU and GPU cores, they can access the shared memory pool at much higher bandwidth and with much less latency than other implementations. The CPU and GPU do not need to move data from one cache to another, and the data never has to leave the chip to go to a separate memory module. Updated March 1, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/unified_memory_architecture Copy

Unix

Unix Unix is a family of operating systems often used by high-end workstations and server computers. Unix operating systems use a modular design, with a primary system kernel, a separate user interface shell, and hundreds of small utility programs. Some versions of Unix are proprietary software, while others are free and open source. It is capable of running on many different hardware platforms. Unix pioneered several features that are now widely adopted by the computer industry. C was created to write utility programs for Unix and is now one of the most widely-used programming languages. Unix is a multi-user operating system — multiple users can log in at the same time and run programs simultaneously. It uses a hierarchical file system that allows an arbitrary number of nested subfolders. Unix servers hosted many early Internet services, and versions of it continue to run on most servers now. AT&T developed the original version of Unix at Bell Labs in 1969. They licensed it to businesses, universities, and other organizations in the 1970s and 1980s. Since each licensee could make their own fork of Unix, it ceased to be a single operating system and became a family of similar systems. An industry consortium, The Open Group, currently maintains the Unix trademark and the Single UNIX Specification, ensuring that the numerous Unix versions comply with a set of standards. NOTE: Common variants of Unix include IBM's AIX, Hewlett-Packard's HP-UX, and Oracle's Solaris. Some other operating systems, like FreeBSD and Linux, are considered Unix-like — they are based on Unix and function the same but do not contain any original Unix code. Updated February 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/unix Copy

Unmount

Unmount Unmounting is the software process of deactivating a disk volume to allow it to be safely disconnected. An unmounted storage disk is still physically connected to a computer and powered on, but the computer cannot read or write data to it. It is the opposite of mounting a disk, which activates the disk volume and makes its contents available to the computer's operating system. Disconnecting a disk while the operating system is in the middle of a data transfer may lead to data corruption or loss. This problem can affect hard drives and solid-state drives (both internal and external), as well as USB flash drives, memory cards, and even disk image files; optical discs, meanwhile, are automatically unmounted during ejection. While mounting a disk happens automatically, unmounting requires actively issuing an Eject or Unmount command; in Windows File Explorer, for example, the Eject button unmounts the selected disk. To keep your data safe, always unmount a disk before disconnecting it. In Windows, first select or open the disk in File Explorer, then click the Drive Tools tab and click the Eject button. On macOS, click and drag the drive's icon from the desktop to the Trash icon on the dock (which becomes an Eject icon), or click the Eject button next to the disk's name in the Finder sidebar. On Unix and Linux computers, you can use the umount command (note that it is not unmount) to unmount a disk volume. Updated August 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/unmount Copy

UPC

UPC Stands for "Universal Product Code." A UPC is a 12-digit identifier that includes a number and a barcode. It uniquely identifies products sold through retailers. UPCs are used in the United States, though other countries around the world use similar codes. For example, products from European countries use a 13-digit barcode called an EAN or "European Article Number" that is compatible with most UPC scanners. A UPC contains a sequence of black vertical bars that represent the twelve digits printed below the barcode. These bars can be scanned by a laser device or camera that quickly reads and interprets the sequence into the corresponding 12-digit number. When you check out at a retail store, the UPC is used to identify and record each item you purchase. The first digit in a UPC is the number system digit, which indicates how the code is used (0 is assigned to most standard retail products in the United States), and the next five digits identify the manufacturer. Digits 7 through 11 identify the specific product. The twelfth digit is a check digit, which validates the preceding 11 digits. UPCs vs Other Product Identifiers SKUs UPCs and SKUs both identify specific products, but UPCs — as the name implies — are more universal. Manufacturers must apply for a range of UPCs, which identify their products globally for distribution by any retailer. SKUs can be generated by any retailer as an alternative to UPCs. They provide retailers with a custom means of identifying products they sell. Product IDs UPCs and product IDs (PIDs) both uniquely identify products, but product IDs are generated by each manufacturer. PIDs allow manufacturers to keep records of their products using their own IDs rather than universal product codes. Many products have both the UPC and product ID printed on the label. Serial Numbers While UPCs identify products, serial numbers (SNs) identify each individual item manufactured. For example, a specific type of wireless Bluetooth headphones may have a single UPC, but each item sold will have a unique serial number. The serial number allows manufacturers to keep records of each item they sell. Updated August 6, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/upc Copy
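The check digit calculation is straightforward to express in code. The following Java sketch (class and method names are invented for illustration) sums the digits in odd positions and multiplies that sum by three, adds the digits in even positions, and then chooses the check digit that brings the total to a multiple of ten:

public class UpcCheckDigit {

    // Computes the twelfth (check) digit of a UPC from its first 11 digits.
    static int checkDigit(String first11Digits) {
        int sum = 0;
        for (int i = 0; i < 11; i++) {
            int digit = first11Digits.charAt(i) - '0';
            // Odd positions (1st, 3rd, 5th, ...) sit at even string indexes and are weighted by 3
            sum += (i % 2 == 0) ? digit * 3 : digit;
        }
        return (10 - (sum % 10)) % 10;
    }

    public static void main(String[] args) {
        // A sample 11-digit prefix; appending the computed check digit gives the full UPC 036000291452
        System.out.println(checkDigit("03600029145"));  // prints 2
    }
}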

Uplink Port

Uplink Port An uplink port is a port on a router or switch designed to connect to another router, switch, or Internet access device. Most home routers include an uplink port for connecting a cable or DSL modem. An uplink port looks like a standard Ethernet port, which is a female jack for an RJ45 connector. However, there are two primary differences between the uplink port and other ports on a router or switch: The uplink port reverses the transmit and receive connectors. The uplink port may support more bandwidth than the other Ethernet ports. Connector Reversal When you connect a desktop computer or laptop to a standard port on a router, the Transmit pins from the Ethernet cable connect directly to the Receive pins inside the port. This is known as a "straight-through" connection since each wire in the cable connects to the same pin number on both ends. Similarly, the Transmit pins inside the port connect directly to the Receive pins on the connected Ethernet cable. When attaching a router to a cable modem, switch, or another router, the Receive and Transmit connections must be swapped for the link to function correctly. One option is to use a "crossover cable," which reverses four of the eight wires. Another option is to use an uplink port, which does this automatically. Greater Bandwidth Many routers and switches provide extra bandwidth through the uplink port to avoid data congestion. For example, a router may have four 1 Gbps Ethernet ports and one 2.5 Gbps uplink port. A switch may have sixteen 1 Gbps Ethernet ports and one 10 Gbps uplink port. Since all incoming and outgoing traffic flows through the uplink port, it is important that it provides more bandwidth than the rest of the individual ports. Standard ports rarely use 100% of their maximum bandwidth at the same time, so the uplink port bandwidth is usually less than the combined bandwidth of the rest of the ports. Using the Uplink Port Most modern routers have an uplink port on the back. This port is often labeled "Internet" or has a circle icon that indicates it should connect to an Internet device, such as a cable or DSL modem. On switches, the uplink port is often labeled "Uplink" since it often connects to another switch or router. NOTE: Some routers support "Auto-MDIX," which detects if a connection is straight-through or crossover. You can connect a modem to an Auto-MDIX port without a crossover cable, but it may not provide the extra bandwidth of an uplink port. Updated October 5, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uplink_port Copy

Upload

Upload Uploading is the sending of data from one device to another over a network or other connection. It is the opposite of downloading, which is the act of receiving data. A network connection can upload and download data at the same time. Many home Internet connections upload data at a slower speed than they download. Using the Internet involves a constant mix of uploading and downloading. Most Internet use involves downloading files and web pages, but every interaction with a server involves uploading a tiny bit of data. Other situations like sending an email, sharing a photo on social media, and participating in a video call also involve uploading data. The word "upload" may also refer to the speed at which outgoing data travels over an Internet connection. Some types of Internet connections are asymmetrical, which means they do not upload and download data at the same speed. For example, a cable internet connection may have a download speed of 900 Mbps but an upload speed of only 20 Mbps. Connections that are symmetrical, like fiber networks, can upload and download at the same speeds. Updated October 5, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/upload Copy

UPnP

UPnP Stands for "Universal Plug and Play." The concept of "Plug and Play" describes computer peripherals that work right away when plugged in with no setup needed by the user, and UPnP applies that concept to network devices. UPnP is a set of networking protocols that helps new devices communicate over a local network without manual configuration. Devices can discover each other's presence across the network and automatically establish connections. UPnP works over both wired (Ethernet) and wireless (Wi-Fi and Bluetooth) connections. Once a UPnP device connects to the network, it receives an IP address from the router using DHCP and announces its presence to the network. Other UPnP devices see this announcement and respond by exchanging information like manufacturer, device type, and the networking services they support. After this introduction, UPnP devices can listen for event notifications from other devices and send each other commands. Finally, a UPnP device may give the network's router port forwarding instructions to help it send and receive data. UPnP AV is a version of UPnP that enables easy setup of multimedia devices on a network. It uses the same protocols as UPnP to help media servers and playback devices find each other with minimal configuration by the user. A media server stores and manages multimedia content and shares it over the network. Televisions, set-top boxes, and other devices connected to the network can browse the media library on the server and stream its music, movies, and TV shows. While UPnP is convenient, it also introduces some security risks. Internet-accessible UPnP devices running outdated or unpatched firmware may be vulnerable to exploits, making them tempting targets for hackers to gain a foothold into a network. UPnP devices may be poorly configured by the manufacturer, using a weak default admin password or opening more ports than necessary. It is wise to keep device software and firmware up-to-date and properly configure your network firewall and router. If you don't need to use UPnP for your network, it may be a good idea to disable it on your router entirely. Updated July 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/upnp Copy
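Device discovery is the part of this process most often seen in code. It uses a protocol called SSDP (Simple Service Discovery Protocol), in which a device sends a small HTTP-style search message to a well-known multicast address and listens for replies. The Java sketch below is a minimal illustration of that search, not a full UPnP implementation, and assumes the local network permits multicast traffic:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class SsdpDiscovery {
    public static void main(String[] args) throws Exception {
        // SSDP search request asking all UPnP devices on the LAN to identify themselves
        String request = "M-SEARCH * HTTP/1.1\r\n"
                + "HOST: 239.255.255.250:1900\r\n"
                + "MAN: \"ssdp:discover\"\r\n"
                + "MX: 2\r\n"                // devices may wait up to 2 seconds before replying
                + "ST: ssdp:all\r\n\r\n";    // search target: all device and service types

        byte[] payload = request.getBytes(StandardCharsets.US_ASCII);
        InetAddress group = InetAddress.getByName("239.255.255.250");

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000);  // stop listening after 3 seconds of silence
            socket.send(new DatagramPacket(payload, payload.length, group, 1900));

            byte[] buffer = new byte[2048];
            while (true) {
                DatagramPacket response = new DatagramPacket(buffer, buffer.length);
                socket.receive(response);  // throws SocketTimeoutException when no more replies arrive
                System.out.println(new String(response.getData(), 0, response.getLength(),
                        StandardCharsets.US_ASCII));
            }
        } catch (SocketTimeoutException e) {
            // Discovery window closed; any devices that responded have been printed above
        }
    }
}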

UPS

UPS Stands for "Uninterruptible Power Supply." A UPS is a device that combines a surge protector and a high-capacity rechargeable battery. One can provide power to computers, broadband modems, Wi-Fi routers, and other devices during unexpected power outages. A typical UPS can power a desktop computer and monitor for up to 15 minutes (providing enough time to save files and safely shut down) or a modem and Wi-Fi router for several hours (to provide a wireless Internet connection). It can also provide temporary emergency power for other devices, including medical equipment like CPAP machines. How much power a UPS can provide depends on its volt-amperes (VA) rating, which measures how fast the power inverter can convert battery energy into electric power. The maximum wattage is roughly 60% of the VA rating — a 900VA UPS can provide about 540W of power (give or take a few watts). Before buying a UPS, it's wise to know how much electricity your computer setup uses so that you get a powerful enough UPS. UPS devices come in several form factors. Some look like standard surge protectors but are thicker to accommodate a battery. Others are upright and resemble small desktop computer towers. All UPS devices clearly label which power outlets connect to the battery backup and which are only surge-protected, so you know where to plug in the most important devices. Many also include USB-A and USB-C ports for charging mobile devices during a power outage. Some UPS devices come with power management software to help your computer communicate with the UPS over a USB connection. If installed, this software can monitor the battery level and battery health of a UPS and show an estimate of how much time you have left. You can even configure it to automatically trigger a system shutdown when the battery is running low or notify you of a power outage via email. Updated December 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ups Copy

Uptime

Uptime Uptime describes how long or how reliably a system has been running. It may be defined as an absolute value (e.g., 64 days) or a percentage (e.g., 99.5%). Uptime percentage is a common metric used to determine the reliability of a web server. Absolute uptime measures the length of time a system has been running since it was booted. For example, if a system was rebooted at 1:00 PM and is still running at 3:30 PM the next day, its uptime will be 26.5 hours. When a system is restarted, the uptime number resets. On a Unix-based system (including macOS), you can view the system uptime by typing the uptime command in the terminal. Uptime percentage is determined by dividing the time a system has been available (or online) by the total time in the measurement period. For example, if a web server has been running for one year, but was not available for a total of 32 hours during the year, its uptime percentage would be calculated as follows: 365 days x 24 hours = 8,760 total hours 8,760 total hours - 32 hours unavailable = 8,728 available hours 8,728 available hours ÷ 8,760 total hours = 99.6347% uptime An uptime of 99.9% is a reasonable goal for a web server. That percentage may sound high, but even a few hours of downtime can have a significant impact on a business. For this reason, a high-traffic enterprise server may have an uptime goal of 99.999%. This level of uptime can be achieved by using multiple servers for redundancy and load balancing. Updated December 31, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uptime Copy
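The same arithmetic can be written as a tiny program. This Java snippet (names invented for the example) reproduces the calculation above:

public class UptimeCalculator {

    // Returns the uptime percentage given the total hours in the period and the hours of downtime.
    static double uptimePercentage(double totalHours, double downtimeHours) {
        double availableHours = totalHours - downtimeHours;
        return (availableHours / totalHours) * 100.0;
    }

    public static void main(String[] args) {
        double totalHours = 365 * 24;   // one year = 8,760 hours
        double downtimeHours = 32;      // the server was unavailable for 32 hours
        System.out.printf("Uptime: %.4f%%%n", uptimePercentage(totalHours, downtimeHours));
        // Prints: Uptime: 99.6347%
    }
}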

URI

URI Stands for "Uniform Resource Identifier." A URI is a string of characters that uniquely identifies a file or other resource. It may include a resource's unique name, its location on a network, or both. All URIs follow a standard structure, allowing applications like web browsers and file transfer utilities to locate specific files or resources. The most common type of URI most people encounter is a Uniform Resource Locator (URL), which provides the location of a web page or other file on a network. A URL includes information about where a specific resource is, as well as how an application should interact with it. For example, the URL of this definition is https://techterms.com/definition/uri. Another type of URI, called a Uniform Resource Name (URN), gives a resource a unique name within a specific namespace without providing information about its location. For example, published books are all identified by an ISBN, which can form part of a URN to track them. The URN urn:isbn:9780441013593 always refers to the novel Dune by Frank Herbert, but does not indicate where a library or bookstore keeps it. A standard URI and its component parts All URIs use a standard formatting structure. Each one starts by listing its scheme, which indicates what protocol an app should use to access the resource. For example, a URI that starts with http or https opens in a web browser, while one that starts with mailto creates a new message in an email client. Next, its authority component describes the resource's host by its domain name or IP address — it may also include a username, password, and port number. After that, a path locates the resource on the host within its folder structure. Finally, a URI may include query parameters that pass information onto a webpage or a fragment that jumps to a specific section within the file. Not all URIs must have every component, and a typical URN only includes the scheme and a path. Updated November 14, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/uri Copy
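Most programming languages include a parser for these components. As a rough illustration, the following Java example uses the standard java.net.URI class to break a made-up URL into the parts described above:

import java.net.URI;

public class UriComponents {
    public static void main(String[] args) {
        // A hypothetical URL used only to show each URI component
        URI uri = URI.create("https://user@www.example.com:8080/docs/guide.html?lang=en#section2");

        System.out.println("Scheme:    " + uri.getScheme());     // https
        System.out.println("Authority: " + uri.getAuthority());  // user@www.example.com:8080
        System.out.println("Host:      " + uri.getHost());       // www.example.com
        System.out.println("Port:      " + uri.getPort());       // 8080
        System.out.println("Path:      " + uri.getPath());       // /docs/guide.html
        System.out.println("Query:     " + uri.getQuery());      // lang=en
        System.out.println("Fragment:  " + uri.getFragment());   // section2
    }
}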

URL

URL Stands for "Uniform Resource Locator." A URL is the address of a specific webpage or file on the Internet. For example, the URL of the TechTerms website is "http://techterms.com." The address of this page is "http://techterms.com/definition/url" and includes the following elements: http:// – the URL prefix, which specifies the protocol used to access the location techterms.com – the server name or IP address of the server /definition/url – the path to the directory or file While all website URLs begin with "http," several other prefixes exist. Below is a list of various URL prefixes: http – a webpage, website directory, or other file available over HTTP ftp – a file or directory of files available to download from an FTP server news – a discussion located within a specific newsgroup telnet – a Unix-based computer system that supports remote client connections gopher – a document or menu located on a gopher server wais – a document or search results from a WAIS database mailto – an email address (often used to redirect browsers to an email client) file – a file located on a local storage device (though not technically a URL because it does not refer to an Internet-based location) You can manually enter a URL by typing it in the address bar of your web browser. For example, you might enter a website URL printed on a business card to visit the company's website. Most URLs, however, appear automatically when you click on a link or open a bookmark. If the server name in the URL is not valid, your browser may display a "Server not found" error. If the path in the URL is incorrect, the server may respond with a 404 error. NOTE: URLs use forward slashes to denote different directories and cannot contain spaces. Therefore, dashes and underscores are often used to separate words within a web address. If your browser produces an error when you visit a specific webpage, you can double-check the URL for typos or other errors. If you find an error, you can manually edit the URL and press Enter to see if it works. Updated November 24, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/url Copy

USB

USB Stands for "Universal Serial Bus." USB is the most common type of port found on modern computers. It is used to connect various peripherals, such as keyboards, mice, game controllers, printers, scanners, and external storage devices. USB provides both data transmission and low voltage (5V) power over a single cable. Devices that require five volts or less can operate over USB without an external power source. A single USB bus supports up to 127 devices, using multiple USB hubs. Peripherals that require significant power, such as external hard drives, may require a powered USB hub, which connects to its own power source. USB has gone through many evolutions since its introduction in 1996. There are now several different versions and types of USB connectors. Below are the different USB standards and their corresponding ports and maximum data transfer rates. The port colors listed below are common but are not required and may differ between manufacturers. USB 1.0 (white USB-A and USB-B — very rare) → 1.5 Mbps USB 1.1 (white USB-A and USB-B, Mini-USB) → 12 Mbps USB 2.0 (black USB-A and USB-B, Mini-USB, Micro-USB) → 480 Mbps USB 3.0 / SuperSpeed (blue USB-A and USB-B, Micro-USB B) → 5 Gbps USB 3.1 / SuperSpeed+ (green USB-A and USB-B, USB Type C) → 10 Gbps USB 3.2 (USB Type C) → 20 Gbps USB 4 (USB Type C) → 40 Gbps Since USB devices, ports, and cables all use the same underlying USB communications standard, they are often compatible with physical adapters. However, when combining USB standards, the data transfer speed is limited to the lowest standard in the connection. Updated August 24, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/usb Copy

USB-C

USB-C Stands for "Universal Serial Bus Type-C." USB-C is a type of USB connector that was introduced in 2015. It supports USB 3.1, which means a USB-C connection can transfer data at up to 10 Gbps and deliver up to 100 watts of power (20 volts at up to 5 amps). Unlike the previous USB Type-A and USB Type-B ports, the USB-C port is symmetrical, which means you never have to worry about plugging in the cable the wrong way. The USB-C connector is the most significant change to the USB connector since the USB interface was standardized in 1996. USB 1.1, 2.0, and 3.0 all used the same flat, rectangular USB-A connector. While there have been several variations of USB-B, such as Mini-USB and Micro-USB, they are all designed for peripheral devices, which connect to a Type-A port on the other end. The Type-C connector introduced with USB 3.1 is designed to be the same on both ends. There is no mini or micro version of USB-C, since the standard USB-C connector is about the same size as a Micro-USB connector. This means it can be used in small devices like smartphones and tablets. Since USB-C supports up to 100 watts of power, it can also be used as the power connector for laptops. In fact, the first laptops to include USB-C ports – the 2015 Apple MacBook and Google Chromebook Pixel – did not include separate power connectors. Instead, the power cable connects directly to the USB-C port. A USB-C connector will only fit in a USB-C port, but USB-C cables are backwards-compatible with other USB standards. Therefore, a USB-C to USB-A or USB-C to USB-B adapter can be used to connect older USB devices to a USB-C port. However, the data transfer rate and wattage will be limited to the lower standard. Updated March 20, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/usb-c Copy

Usenet

Usenet Usenet is a collection of online discussions that are organized into newsgroups. Users may create their own discussion topics or contribute to existing threads within a topic. Some newsgroups provide forums for questions and answers, while others are designed primarily for file sharing. Usenet started in 1980 as a medium for discussing topics online, similar to a bulletin board system (BBS). However, instead of being hosted on a single server, Usenet newsgroups were hosted on hundreds of servers across the world. As the Internet grew in popularity, so did Usenet, extending online discussions to countries around the globe. While Usenet was designed for text-based discussions, it evolved to support file sharing as well. Today, Usenet is commonly used to distribute files (or "binaries" since the files may contain any type of binary data). Examples include videos, songs, pictures, and software programs. Usenet downloads commonly have an .NZB extension, which is an XML file that describes one or more locations where the actual file can be downloaded. Usenet data is transmitted over the Network News Transfer Protocol (NNTP). Access to Usenet is available through a number of different Usenet providers, typically for around $10 USD per month. Examples include Usenet Storm, Newshosting, and Astraweb, among many others. Some services offer web-based tools to access and contribute to newsgroups, but you can also use a newsreader, such as Newsbin, Agent, and Pan. Updated February 8, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/usenet Copy

User Account

User Account A user account is the set of data that defines a user on a computer system. It includes a unique username that identifies it amongst other users and, combined with a password, allows the user to log in. It determines what actions they're allowed to do, lets them customize system settings and save their preferences, and may provide them with a dedicated place to store their files. A computer's operating system lets you choose which account to log in to when it starts, but you can switch accounts at any time by logging out or just returning to the Lock screen. If you're sharing a computer amongst several people, each person should have their own user account. This keeps their personal files and system preferences separate and secure from other users. Administrator, standard, and guest user accounts in the macOS System Settings app When you first create a user account, you may have several types of accounts to choose from. A standard user account can do most everyday tasks on a computer — installing applications, creating and editing documents, and accessing the Internet — but cannot access other users' files without permission or make significant changes to system settings. An administrator account has extra privileges that help them manage the computer. Administrators (also called root or superuser accounts) can control all system settings, access system files, and manage other user accounts. Every computer needs at least one administrator-level account. Finally, a guest user account is a limited account that does not require a password. It grants someone temporary access, but erases all user data once the guest logs out. Every user account stores several types of information: A user profile that saves personal settings, preferences, and other data specific to that user. For example, a custom desktop wallpaper, accessibility options, and a custom assortment of apps on the Start Menu or Dock. A Home directory for that user to store personal files that other accounts won't have access to. Common subfolders within Home include Documents and Downloads. A list of permissions and access rights that control what a user is allowed to do. This includes specific file and folder permissions that allow them to read, write, and execute files. Permissions settings may also determine whether a user can install applications or modify system settings. Which user groups the account belongs to. User groups let administrators set permissions for multiple users at once and typically apply to users with the same role. For example, a home computer may have two user groups — one group for parents that gives them full control over the computer, and another group for kids that is more limited and includes a curfew. Updated December 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/user_account Copy

User Experience

User Experience User experience, commonly abbreviated "UX," is the experience a person has using a product or service. In the technology world, this often refers to a hardware device or software program. A positive user experience is simple, intuitive, and enjoyable. A negative user experience is complex, confusing, and frustrating. Successful companies focus on creating a high-quality user experience. When a user enjoys using a product, he or she is likely to continue using the product and may recommend it to other people. Conversely, if a product frustrates a user, he or she may switch to another product and tell others to avoid using it. Therefore, producing a positive user experience benefits both the company and the end user. The focus of UX design is the user, rather than the product itself. The goal is to meet or exceed user expectations rather than to simply deliver a product. A product that provides a positive user experience is: functional intuitive easy-to-use reliable enjoyable Software UX A software application that provides a good UX has an intuitive user interface. This means a typical user can learn the program by simply using it, rather than reading a manual or taking lessons. For example, a program with intuitive icons and simple menu bar options may be easy for a new user to understand. However, if a developer creates a program with non-standard icons and complex menu options, it will make the program less intuitive, likely resulting in a negative user experience. Website UX A website with a good UX is easy to view and simple to navigate. The font should be easy to read and the content should be clearly separated from any advertisements on the page. There should not be any unexpected popups or distracting animations on the page. The navigation should be simple, which often includes a navigation bar at the top of the window as well as clear links within the content. Responsive design can help provide a good UX for both desktop and mobile users. If you want to create a website with a positive user experience, it may help to follow these web design rules. User experience is often associated with information technology, but it applies to all products. For example, automotive manufacturers strive to create a positive user experience for drivers. Retail stores aim to provide a positive user experience for customers. Since user needs and wants are constantly evolving, companies must listen and respond to user feedback. Successful companies continually update their products and services to meet customer expectations. Updated May 4, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/user_experience Copy

User ID

User ID A user ID is a unique identifier, commonly used to log on to a website, app, or online service. It may be a username, account number, or email address. Many websites require an email address for the user ID. This provides two benefits: It is a simple way of ensuring you select a unique username. It automatically associates your email address with your account. Some services require you to choose a user ID that is not your email address. Instagram and Snapchat, for example, require you to select a custom username for your profile. Services that require a username and email address may allow you to log in using either identifier, since they are both unique. User ID vs Username In many cases, the terms "user ID" and "username" are synonymous. For example, a website may provide a login interface with two fields labeled Username and Password. Another website may label the two fields as User ID and Password, which refer to the same thing. Technically, however, usernames are a subset of user IDs, since a user ID may be an email address, number, or other unique identifier that is not necessarily a name. Updated April 4, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/user_id Copy

User Interface

User Interface A user interface, also called a "UI" or simply an "interface," is the means by which a person controls a software application or hardware device. A good user interface provides a "user-friendly" experience, allowing the user to interact with the software or hardware in a natural and intuitive way. Nearly all software programs have a graphical user interface, or GUI. This means the program includes graphical controls, which the user can select using a mouse or keyboard. A typical GUI of a software program includes a menu bar, toolbar, windows, buttons, and other controls. The Macintosh and Windows operating systems have different user interfaces, but they share many of the same elements, such as a desktop, windows, icons, etc. These common elements make it possible for people to use either operating system without having to completely relearn the interface. Similarly, programs like word processors and Web browsers all have rather similar interfaces, providing a consistent user experience across multiple programs. Most hardware devices also include a user interface, though it is typically not as complex as a software interface. A common example of a hardware device with a user interface is a remote control. A typical TV remote has a numeric keypad, volume and channel buttons, mute and power buttons, an input selector, and other buttons that perform various functions. This set of buttons and the way they are laid out on the controller makes up the user interface. Other devices, such as digital cameras, audio mixing consoles, and stereo systems also have a user interface. While user interfaces can be designed for either hardware or software, most are a combination of both. For example, to control a software program, you typically need to use a keyboard and mouse, which each have their own user interface. Likewise, to control a digital camera, you may need to navigate through the on-screen menus, which is a software interface. Regardless of the application, the goal of a good user interface is to be user-friendly. After all, we all know how frustrating it can be to use a device that doesn't work the way we want it to. Updated March 31, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/user_interface Copy

User Space

User Space User space is system memory allocated to running applications. It is often contrasted with kernel space, which is memory allocated to the kernel and the operating system. Separating user space from kernel space protects the system from errant processes that could use up memory required by the operating system (OS). The result is a more stable system where memory leaks and program crashes do not affect the OS. It is an important aspect of sandboxing, which is common in modern operating systems. When you open an application (or "executable file"), the OS loads the program and required resources into the user space. Any plug-ins and required libraries are loaded into the user space as well. If you open and edit a document, that file will also be temporarily loaded into the user space. When you save the file, the data is written from the user space to a storage device such as an HDD or SSD. NOTE: The more RAM your computer has, the more user space is available. Having extra RAM allows you to run more applications simultaneously without your computer slowing down because of a memory bottleneck. Updated October 31, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/user_space Copy

User-Friendly

User-Friendly User-friendly describes a hardware device or software interface that is easy to use. It is "friendly" to the user, meaning it is not difficult to learn or understand. While "user-friendly" is a subjective term, the following are several common attributes found in user-friendly interfaces. Simple. A user-friendly interface is not overly complex, but instead is straightforward, providing quick access to common features or commands. Clean. A good user interface is well-organized, making it easy to locate different tools and options. Intuitive. In order to be user-friendly, an interface must make sense to the average user and should require minimal explanation for how to use it. Reliable. An unreliable product is not user-friendly, since it will cause undue frustration for the user. A user-friendly product is reliable and does not malfunction or crash. The goal of a user-friendly product is to provide a good user experience (or "UX"). This may look different depending on the end user for whom the product is designed. For example, a user-friendly kid's game will have a much different interface than a professional CAD program. However, the rules above apply to both types of software. Even if a program has many advanced features, it is still possible to make it user-friendly by designing a simple, clean, and intuitive interface. User-friendly products are typically more successful than those with complex, convoluted interfaces that are difficult to use. Additionally, customers often avoid unreliable products, such as software programs that are full of bugs. In order to ensure a good user experience, companies often thoroughly test their products before releasing them to the public. Updated January 29, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/user-friendly Copy

Username

Username A username is a unique identifier for a user's account on a computer system. A username combined with a password is one of the most common authentication methods used to log in to computers, services, and websites. While some systems assign each person a username automatically, most allow a user to choose their username — as long as their choice is not already taken by someone else. Most computer operating systems support multiple user accounts, with unique usernames for each one. Windows, macOS, Unix, and Linux are all multi-user operating systems that allow each account to have its own set of private user folders, run its own applications, and customize its own system settings. Special accounts for administrators may use a default username, like 'admin' or 'root.' A username, combined with a password, is the most common form of user authentication Websites and other online services may also require each user to have a unique username. For example, your email provider requires you to choose a username that will form the first part of your email address (before the @ symbol). Social media sites require a username not only to identify each user to the system itself but also to other users. Posts, comments, messages, and other user-generated content can be associated with a particular username, even if the username is anonymous, so other users can identify which account posted the content. Some websites and services allow you to use your full email address as a username. In these cases, your password is still unique to each site. NOTE: Your username on a system or website is visible to others, so you don't need to keep it secret unless you want the account to be anonymous. Your password, on the other hand, should always be kept private. Updated June 9, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/username Copy

UTF

UTF Stands for "Unicode Transformation Format." UTF refers to several types of Unicode character encodings, including UTF-7, UTF-8, UTF-16, and UTF-32. UTF-7 - uses 7 bits for each character. It was designed to represent ASCII characters in email messages that required Unicode encoding. UTF-8 - the most popular type of Unicode encoding. It uses one byte for standard English letters and symbols, two bytes for additional Latin and Middle Eastern characters, and three bytes for Asian characters. Additional characters can be represented using four bytes. UTF-8 is backwards compatible with ASCII, since the first 128 characters are mapped to the same values. UTF-16 - an extension of the "UCS-2" Unicode encoding, which uses two bytes to represent 65,536 characters. However, UTF-16 also supports four bytes for additional characters up to one million. UTF-32 - a multibyte encoding that represents each character with 4 bytes. Most text in documents and webpages is encoded using one of the UTF encodings above. Many word processing programs do not allow you to view the character encoding of open documents, though some display the encoding on the bottom of the document window or within the file properties. If you want to see the type of character encoding used by a webpage, you can select View → View Source to view the HTML of the page. The character encoding, if defined, will be in the header section, near the top of the HTML. A page that uses UTF-8 encoding may include one of the following meta tags, depending on the version of the HTML. XHTML: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> HTML 5: <meta charset="UTF-8"> Updated April 20, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/utf Copy
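The variable-length nature of UTF-8 is easy to see in code. The short Java example below (the sample characters are chosen arbitrarily, and the source file is assumed to be saved as UTF-8) prints how many bytes each character occupies when encoded as UTF-8:

import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    public static void main(String[] args) {
        // Each sample encodes to a different number of bytes in UTF-8
        String[] samples = { "A", "é", "世", "😀" };

        for (String s : samples) {
            int byteCount = s.getBytes(StandardCharsets.UTF_8).length;
            System.out.println(s + " -> " + byteCount + " byte(s)");
        }
        // A  -> 1 byte  (ASCII range)
        // é  -> 2 bytes (Latin-1 Supplement)
        // 世 -> 3 bytes (CJK character)
        // 😀 -> 4 bytes (emoji, outside the Basic Multilingual Plane)
    }
}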

Utility

Utility A utility, or utility program, is a software program that helps configure, monitor, or maintain a computer and its operating system. Most utilities are smaller programs, and many are included with an operating system. Utility programs often run as background processes, only occasionally requiring user interaction. Many categories of utility programs are available that keep your computer running well, secure it from threats, and even add new functionality. The following categories are the ones that computer users interact with the most. Antivirus software is one of the most important types of utility, scanning files on your computer for viruses and removing any that it finds. Antivirus software runs constantly in the background and alerts you to anything it finds. Windows includes a built-in antivirus utility, but third-party options are available. Backup software is also an extremely important type of utility. It regularly backs up documents in certain locations to an external drive or cloud storage, allowing you to restore lost or damaged files to older versions. Windows and macOS both include local backup utilities and limited cloud backup storage. System Monitoring software displays information about your computer's processor, memory, storage, and networking activity. It can also list every program running on your computer, show how many resources each program uses, and even provide a way to end unresponsive or unwanted programs. The Windows Task Manager and the macOS Activity Monitor are each operating system's primary system monitor utility. Disk utility software can scan disks for errors, manage disk partitions, and format disk volumes. Some disk utilities can provide a map of all the files on a computer that shows which folders and file types take up the most space. Network utilities can monitor incoming and outgoing network traffic, provide a map of network infrastructure, and scan for nearby wireless networks. They can also write log files of a computer's network activity. While that list covers the most commonly used types of utility, there are plenty of small utilities that don't fit into those categories. Windows and macOS also include dedicated applications to manage fonts, encrypt disks, take screenshots, and configure other specialized system settings. Updated December 9, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/utility Copy

UUID

UUID Stands for "Universally Unique Identifier." A UUID is a 128-bit number used to identify a unique object on a computer system. Examples include objects in software programs, parameters in URLs, and labels of individual hardware devices. Most UUIDs are represented in hexadecimal notation, using 32 characters, separated by four hyphens. Below is an example UUID: 90f51180-158b-11eb-adc1-0242ac120003 While there are several ways to generate a UUID, the first string of characters is typically based on the timestamp (the current time) of the host system. The first UUID standard (version 1), commonly used in the 1990s, had the following format: first 8 characters - "low" 32 bits of the timestamp next 4 characters - "middle" 16 bits of the timestamp next 4 characters - "high" 16 bits of the timestamp next 4 characters - 16-bit "variant" in the clock sequence final 12 characters - 48-bit host ID Combining a random 48-bit host ID with a 64-bit timestamp and a 16-bit "variant" value makes it almost impossible for two UUIDs to be identical. Using the fourth version of the UUID standard (UUID4), the chances of generating two equal UUIDs are about 1 in 10^37. Even if millions of UUIDs are generated every second, it is unlikely a duplicate UUID will be created for several decades. UUID vs GUID In most cases, UUID and GUID are synonymous. A GUID (Globally Unique Identifier) has the same number of digits and the same format as a UUID. However, Microsoft often uses the term GUID (rather than UUID) to refer to unique identifiers on Windows devices. The Microsoft .NET Framework includes the function NewGuid(), which generates a GUID within a software program. Updated October 23, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uuid Copy
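Most modern languages can generate UUIDs through their standard libraries. For example, Java's built-in java.util.UUID class produces random (version 4) identifiers, as in this minimal sketch:

import java.util.UUID;

public class UuidExample {
    public static void main(String[] args) {
        // Generate a random (version 4) UUID
        UUID id = UUID.randomUUID();

        System.out.println("UUID:    " + id);            // e.g., 3f2b1c0e-9d4a-4f6b-8c2d-1a2b3c4d5e6f (value varies)
        System.out.println("Version: " + id.version());  // 4 = randomly generated
        System.out.println("Variant: " + id.variant());  // 2 = standard RFC 4122 layout
    }
}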

UWP

UWP Stands for "Universal Windows Platform." UWP is an application development platform created by Microsoft. A UWP app can run on multiple devices, including Windows PCs, tablets, and smartphones. Some UWP apps can run on other types of Microsoft hardware including Xbox, HoloLens, and IoT devices. UWP provides a common platform for developers to build apps for multiple types of hardware. The Universal Windows Platform API includes a wide range of libraries, functions, and user interface elements developers can integrate into their apps. By including multiple DeviceFamily types in a UWP app, a developer can customize the interface of the app for several types of devices. Microsoft Visual Studio 2015 and later supports the UWP API and provides developers with a flexible multiplatform development environment. While the API is implemented in C++, UWP apps can be written in C++, C#, Visual Basic, or even JavaScript. The Microsoft Visual Studio IDE will compile the code as a UWP app if Windows.Universal is set as the target device family. UWP and the Microsoft Store When the Microsoft Store (formerly the Windows Store) launched in 2012, developers could only submit UWP apps. Programs built on earlier platforms, such as Windows Forms and WPF, were not allowed. To expand the range of apps available on the Store, Microsoft later created "Desktop Bridge," which allows developers to package non-UWP apps for the Microsoft Store. NOTE: UWP was introduced with Windows 10 and is not backwards compatible. Therefore, UWP apps will only run on PCs running Windows 10 and later, as well as other supported devices. Updated August 20, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/uwp Copy

Vaporware

Vaporware Vaporware is hardware or software that has been announced, but has missed its release date by a significant amount of time. It includes both products that are eventually released and products that are never released at all. Vaporware is typically produced when a company is over-aggressive in announcing a product release date. Since technology is a competitive field, one business can sometimes gain an advantage over another by announcing their product will be available first. However, unforeseen delays can push back the release date significantly, leading bloggers and news publishers alike to dub the product as "vaporware." There is no official length of delay that makes a product vaporware, but the term often surfaces after a delay of three months or more. It also is commonly used after a company postpones the release date for the second time. While vaporware can sometimes work in a company's favor by generating extra buzz, it typically diminishes the product hype, especially after a long delay. When a company has multiple products turn into vaporware, it can tarnish the corporate image and negatively affect the company's credibility with consumers. Therefore, most businesses make a strong effort to release their products on time. An example of a recent vaporware product is the white iPhone 4, which was announced in early June of 2010, but has been delayed several times since then. Updated October 29, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vaporware Copy

Variable

Variable In mathematics, a variable is a symbol or letter, such as "x" or "y," that represents a value. In algebraic equations, the value of one variable is often dependent on the value of another. For example, in the equation below, y is the "dependent variable" because its value is based on the value assigned to the "independent variable" x. y = 10 + 2x The equation above may also be called a function since y is a function of x. If x = 1, then y = 12. If x = 2, then y = 14. Variables are also used in computer programming to store specific values within a program. They are assigned both a data type as well as a value. For example, a variable of the string data type may contain a value of "sample text" while a variable of the integer data type may contain a value of "11". Some programming languages require variables to be declared before they can be used, while others allow variables to be created on the fly. The data type, if not defined explicitly, is determined based on the initial value given to the variable. A function within a program may contain multiple variables, each of which may be assigned different values based on the input parameters. Likewise, a variable may also be used to return a specific value as the output of a function. In the Java example below, the variable i is incremented during each iteration of the while loop, and x is returned as the output. while (i < max) {   x = x+10;   i++; } … return x; As the name implies, the value of a variable can change. Variables that store values that do not change are called constants. Updated May 3, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/variable Copy
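Expanding the snippet above into a complete, runnable form may make the idea clearer. In this hypothetical Java example, each variable has a declared data type, the value of x changes as the loop runs, and the variable's final value is returned as the function's output:

public class VariableExample {

    // Returns a value that depends on the input parameters, using the loop from the example above.
    static int addTens(int i, int max) {
        int x = 0;          // integer variable, initialized to 0
        while (i < max) {
            x = x + 10;     // the value of x changes on each iteration
            i++;
        }
        return x;           // the variable's final value becomes the function's output
    }

    public static void main(String[] args) {
        String label = "Result: ";                   // a string variable
        System.out.println(label + addTens(0, 5));   // prints "Result: 50"
    }
}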

VCI

VCI Stands for "Virtual Channel Identifier." The VCI, used in conjunction with the VPI (virtual path identifier), indicates where an ATM cell is to travel over a network. ATM, or Asynchronous Transfer Mode, is a method that many ISPs (Internet Service Providers) use to transfer data to client computers. Because ATM sends packets over fixed channels, the data is easier to track than information sent over the standard TCP/IP protocol. The VCI within each ATM cell defines the fixed channel on which the packet of information should be sent. It is a 16-bit field, compared to the VPI, which is only 8 bits. Since this numerical tag specifies the virtual channel that each packet belongs to, it prevents interference with other data being sent across the network. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vci Copy

VDSL

VDSL Stands for "Very high bit rate Digital Subscriber Line." VDSL (sometimes called "VHDSL") is a DSL standard that provides high speed Internet access. It is an improved version of ADSL (asymmetric DSL) that offers download speeds of up to 52 Mbps (6.5 megabytes per second) and upload speeds up to 16 Mbps (2 megabytes per second). Like previous DSL standards, VDSL operates over copper wires and can be deployed over existing telephone wiring. This means an ISP can provide VDSL Internet access to users as long as they have a landline and are located within a specific proximity required by the ISP. The connection is established through a VDSL modem, which connects to a computer or router on one end and a telephone outlet on the other. VDSL was developed to support the high bandwidth requirements of HDTV, media streaming, and VoIP connections. By providing downstream transmission rates of over 50 Mbps, VDSL has enough bandwidth to support all of these connections simultaneously. This fast data transfer rate also allows VDSL to compete with cable Internet providers, which have historically offered faster Internet access speeds. The first version of VDSL (ITU standard G.993.1) was approved in 2004 and is still commonly offered by Internet service providers. However, an updated version called VDSL2 (ITU standard G.993.2) was introduced in 2006 and offers symmetric download and upload speeds of 100 Mbps. VDSL2 is typically provided to businesses and other organizations that require high-speed connections for multiple systems. Updated February 22, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vdsl Copy

VDU

VDU Stands for "Visual Display Unit." A VDU displays images generated by a computer or other electronic device. The term VDU is often used synonymously with "monitor," but it can also refer to another type of display, such as a digital projector. Visual display units may be peripheral devices or may be integrated with the other components. For example, the Apple iMac uses an all-in-one design, in which the screen and computer are built into a single unit. Early VDUs were primarily cathode ray tube (CRT) displays and typically had a diagonal size of 13 inches or less. During the 1990s, 15" and 17" displays became standard, and some manufacturers began producing displays over 20" in size. At the turn of the century, flat panel displays became more common, and by 2006, CRT displays were hard to find. Today, it is common for computers to come with VDUs that are 20" to 30" in size. Thanks to the recent growth in LCD, plasma, and LED technology, manufacturing large screens is much more cost effective than before. Updated November 6, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vdu Copy

Vector

Vector Mathematically, a vector is a quantity, defined by both magnitude and direction. For example, a vector could be illustrated by a 1-inch arrow pointing at a 30-degree angle. Another vector may be 2.5 inches and point at a 160-degree angle. In the computer world, vectors are used to define paths in certain types of images, such as EPS files and Adobe Illustrator documents. These images are often called vector graphics since they are comprised of vectors, or paths, instead of dots. Vector graphics can be scaled larger or smaller without losing quality. In computer science, a vector may refer to a type of one-dimensional array. For example, a vector called "fibonacci" that stores the first six values of the Fibonacci sequence would be defined as follows: fibonacci[0] = 0, fibonacci[1] = 1, fibonacci[2] = 1, fibonacci[3] = 2, fibonacci[4] = 3, fibonacci[5] = 5 Vectors are similar to arrays, but unlike arrays, vectors use their own memory management mechanisms. Arrays are restricted to the memory structure supplied by the programming language they are created in, typically called a stack. Vectors have a more dynamic structure, often referred to as a heap, which gives them greater flexibility in how they use memory. While an array uses a static amount of memory, the memory used by the vector can be increased or decreased as elements are added or removed from the vector. Updated November 7, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vector Copy
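Several languages provide this kind of dynamically sized structure in their standard libraries. As a simple illustration, Java's java.util.Vector class can hold the Fibonacci values described above and grows or shrinks as elements are added or removed:

import java.util.Vector;

public class VectorExample {
    public static void main(String[] args) {
        // A vector grows dynamically as elements are added
        Vector<Integer> fibonacci = new Vector<>();
        fibonacci.add(0);
        fibonacci.add(1);
        fibonacci.add(1);
        fibonacci.add(2);
        fibonacci.add(3);
        fibonacci.add(5);

        System.out.println(fibonacci);          // [0, 1, 1, 2, 3, 5]
        System.out.println(fibonacci.get(5));   // 5 (the element at index 5)

        fibonacci.remove(0);                    // removing an element shrinks the vector
        System.out.println(fibonacci.size());   // 5
    }
}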

Vector Graphic

Vector Graphic A vector graphic is a digital image comprised of points, paths, curves, and geometric shapes. Unlike raster graphics, which use a fixed grid of pixels, vector graphic files have no fixed resolution and can be resized, stretched, and otherwise manipulated without any quality loss. Graphic designers and illustrators use vector graphics for simple drawings, logos, icons, and illustrations. The individual characters in a font file are also vector graphics. Paths in a vector graphic consist of points linked together. A point along a path can represent a sharp corner or a smooth curve (known as a bézier curve). When a path forms a closed loop by linking the end back to the beginning, it becomes a solid shape. You can adjust a path's stroke to change its width, style, and color; you can also fill an enclosed shape with a color or gradient. A vector graphic's file size is typically much smaller than that of a raster graphic. A vector graphic file saves the mathematical information that defines the paths and shapes, along with their stroke and fill properties. Since this information is all stored as numerical values instead of individual pixels, a complicated vector graphic will have a smaller file size than a similarly-detailed raster graphic. NOTE: Common vector graphic file types include Adobe Illustrator (.AI), Encapsulated PostScript (.EPS), and Scalable Vector Graphic (.SVG). Updated February 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vector_graphic Copy

Ventura

Ventura macOS Ventura (macOS 13) is the 19th version of Apple's macOS operating system. It followed macOS Monterey (macOS 12) and was released to the public on October 24, 2022. macOS 13 Ventura includes a few significant user interface changes. The most notable update is the introduction of Stage Manager, a new window management tool. It works much like the Spaces desktop manager and allows you to arrange app windows into groups and quickly switch between them. The window groups appear as thumbnails on the left edge of the screen, so you can see at a glance what apps and windows are in each one. Each desktop in Spaces can have its own set of Stage Manager window groups. Ventura also gave the System Preferences window a significant update. Now renamed to System Settings, it features a new design similar to the Settings apps on iOS and iPadOS. The new app reorganizes Settings options into new categories, accessible via a navigation sidebar. macOS Ventura includes several new built-in apps: Freeform is a collaborative canvas app that you can use for creating quick sketches and taking notes. You can see the contributions of others in real time, just like working on a real whiteboard. Freeform is also available for iOS and iPadOS. A new Weather app, based on the one from iOS, is now available on the Mac. It shows animated weather conditions, hourly and 10-day forecasts, and detailed weather maps. The Clock app from iOS and iPadOS also comes to the Mac. You can set alarms and timers in the app or through Siri, and see world clocks on a built-in map. Several OS-level improvements also come with macOS Ventura. Some of those improvements include: Continuity Camera now allows you to use an iPhone's rear-facing camera as a webcam for FaceTime calls and any other video-calling app. It works wirelessly and automatically once the Mac and iPhone are near each other with Wi-Fi and Bluetooth enabled. This feature requires an iPhone XR or newer. Passkey support is added to Safari, so you can now sign in to supported websites without using a password. iCloud Shared Photo Library lets multiple members of an iCloud Family Sharing plan use a shared photo library. Anyone in the family can add, edit, and delete photos in the shared library. Messages gains the ability to edit and un-send recent messages. Mail adds "send later" and "undo send" options. macOS Ventura supports MacBook Pros, iMacs, and iMac Pros released in 2017 and later, MacBook Airs and Mac Minis released in 2018 or later, Mac Pros released in 2019 or later, and the Mac Studio. Updated February 3, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/ventura Copy

Veronica

Veronica Short for "Very Easy Rodent-Oriented Netwide Index to Computerized Archives." Veronica was a tool developed in the early 1990s for searching Gopher servers. The "rodent-oriented" acronym reflects how Veronica enabled keyword searches across Gopher, an information network structured as menus and text files. The Gopher system, developed at the University of Minnesota, grew to over 6,000 global servers at its peak. Browsing the network was time-consuming and difficult. Searching the network with Veronica was more efficient as it quickly listed Gopher menu items and documents that matched search phrases. Unlike modern search engines, which scan entire documents for content, Veronica only indexed the titles of Gopher menus and items. The lightweight approach allowed it to query multiple servers quickly but produced limited results. Users could employ basic logical operators like AND, NOT, and OR to refine searches, and use an asterisk (*) for wildcard searches. These features were akin to Unix search syntax, making Veronica intuitive for tech-savvy users at the time. Later versions of Veronica extended beyond Gopher to include searches of select websites, newsgroups, and FTP servers. It briefly bridged the gap between Gopher's text-based environment and the Web. Although its influence waned with the decline of Gopher, Veronica had a significant impact on the future of the Internet. The tool's popularity made it clear that users preferred to search, rather than browse, for information online. This shift led to the development of early search engines like WebCrawler, Lycos, Infoseek, AltaVista, and Excite. Yahoo, which started as a directory in 1994, transitioned to a search-focused site a few years later, followed by the launch of Google in 1998. Updated November 13, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/veronica Copy

Version Control

Version Control Version control is used to manage multiple versions of computer files and programs. A version control system, or VCS, provides two primary data management capabilities. It allows users to 1) lock files so they can only be edited by one person at a time, and 2) track changes to files. If you are the only person editing a document, there is no need to lock a file for editing. However, if a team of developers is working on a project, it is important that no two people are editing the same file at the same time. When this happens, it is possible for one person to accidentally overwrite the changes made by someone else. For this reason, version control allows users to "check out" files for editing. When a file has been checked out from a shared file server, it cannot be edited by other users. When the person finishes editing the file, he can save the changes and "check in" the file so that other users can edit the file. Version control also allows users to track changes to files. This type of version control is often used in software development and is also known as "source control" or "revision control." Popular version control systems like Subversion and CVS allow developers to save incremental versions of programs and source code files during the development process. This provides the capability to roll back to an earlier version of the program if necessary. For example, if bugs are found in a new version of a software program, the developer can review the previous version when debugging the code. Version control software requires that all files are saved in a central location. This location is called the repository and contains all previous and current versions of files managed by the VCS. Whenever a new file is created or a current file is updated, the changes are "committed" to the repository, so the latest version is available to all users. Updated August 18, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/version_control Copy
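The following is a minimal Python sketch, not modeled on any particular VCS, that illustrates the ideas above: checking a file out (locking it), committing incremental versions, and rolling back to an earlier one. The class and method names are invented for illustration only.

  # Minimal illustration of version control concepts: check-out locking,
  # incremental commits, and rolling back to an earlier version.
  class Repository:
      def __init__(self):
          self.versions = []        # every committed version of the file
          self.locked_by = None     # user who currently has it checked out

      def check_out(self, user):
          if self.locked_by is not None:
              raise RuntimeError(f"Already checked out by {self.locked_by}")
          self.locked_by = user
          return self.versions[-1] if self.versions else ""

      def commit(self, user, contents):
          if self.locked_by != user:
              raise RuntimeError("Check the file out before committing")
          self.versions.append(contents)   # save a new incremental version
          self.locked_by = None            # check the file back in

      def roll_back(self, version_number):
          return self.versions[version_number]   # review an earlier version

  repo = Repository()
  repo.check_out("alice")
  repo.commit("alice", "first version of main.c")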

Vertical Market Software

Vertical Market Software A vertical market is one that supplies goods to a specific industry. For example, a MIDI keyboard manufacturer develops products for a vertical market since the keyboards are only used by people who want to create music on their computers. Vertical market software, therefore, is software developed for niche applications or for a specific clientele. For example, investment, real estate, and banking programs are all vertical market software applications because they are only used by a specific group of people. Scientific analysis programs, screenplay writing programs, and programs used by medical professionals are also vertical market software because they cater to a specific audience. There is typically not a lot of competition in vertical markets, but it can still be a risky industry since developers are highly dependent on specific clients to buy their products. The opposite of vertical market software is horizontal market software, which is developed for the general public or multiple industries. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/verticalmarketsoftware Copy

VFAT

VFAT Stands for "Virtual File Allocation Table." VFAT is an extension to the FAT16 and FAT32 file systems that adds support for long file names. VFAT extends the FAT file systems but does not break backward compatibility. Operating systems that support it (like Windows 95 and later) can use long file names, while ones that don't (like MS-DOS and Windows 3.1) will see short, truncated names when viewing the same volume. VFAT was introduced in Windows 95 to add support for long file names as part of an effort to make computers more user-friendly. The FAT16 file system used in DOS and Windows operating systems was limited to 8.3 file names — 8 characters with a file extension of 3 letters — but VFAT allowed file names up to 255 characters, including spaces, commas, multiple periods, and some other extended characters. VFAT works by creating separate entries in a directory for the long file name and for a shortened 8.3 version. The long file name entry is formatted in a way that older operating systems will ignore in favor of the 8.3 file name. For example, a file with the long file name Audrey's Birthday Party.jpg will be shortened to AUDREY~1.JPG when viewed in an operating system that does not support VFAT. NOTE: While the FAT32 file system does not support long file names as part of the specification, every operating system that supports it also supports VFAT. Therefore, someone using a FAT32 volume can expect long file name support. Updated October 28, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vfat Copy
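The Python sketch below shows the general idea of deriving an 8.3 alias from a long file name. It is a simplification for illustration; the actual VFAT algorithm has additional rules for numeric tails, name collisions, and reserved characters.

  # Simplified sketch of mapping a long file name to an 8.3 short name.
  def short_name(long_name, tail=1):
      base, dot, ext = long_name.rpartition(".")
      if not dot:                       # no extension at all
          base, ext = ext, ""
      keep = lambda s: "".join(c for c in s.upper() if c.isalnum())
      base, ext = keep(base)[:6], keep(ext)[:3]
      return f"{base}~{tail}" + (f".{ext}" if ext else "")

  print(short_name("Audrey's Birthday Party.jpg"))   # AUDREY~1.JPG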

VGA

VGA Stands for "Video Graphics Array." VGA is a computer graphics standard for displaying color graphics. The original specification supports a resolution of 640 x 480 at 16 colors or 320 x 200 at 256 colors. It also specifies the accompanying 15-pin VGA display connector. IBM originally developed the VGA standard in 1987 for their PS/2 computer system, and the rest of the IBM compatible PC industry soon adopted it as the standard. Due to its long-lasting popularity, the 640 x 480 16-color display mode is still considered the lowest common standard that a computer supports. The development of more-powerful graphics cards resulted in extensions to VGA that allowed higher resolutions, like SVGA (800 x 600) and XGA (1024 x 768), as well as higher color depth up to 24-bit (more than 16 million colors). The 15-pin VGA connector was the standard connection between computers and monitors (as well as projectors, televisions, and other displays) from its introduction until its eventual replacement by DVI, HDMI, and DisplayPort. A VGA connection sends video signals from a computer to a monitor through an analog connection, making the video signal vulnerable to interference from other electronic devices and signal degradation over long cables. Updated October 26, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vga Copy

Video Card

Video Card A video card is a hardware component responsible for rendering graphics and sending them to the computer's monitor. It includes a GPU, video memory, and at least one physical port for connecting an external monitor. A video card typically fits into a computer's PCIe x16 slot, although some low-end cards use an x8 slot instead. Not all computers have a discrete video card — some use CPUs with an integrated GPU and include a port for the monitor on the motherboard's I/O panel. The GPU on a video card is responsible for drawing a computer's graphics, both 2D and 3D. Its memory, or VRAM, stores temporary graphics-related data, including the frame buffer, textures, and other visual assets that the GPU needs quick access to. A video card often includes dedicated cooling for the GPU, like a heat sink and fan, that takes up enough space so that the video card occupies two expansion slots. Some video cards require more power than the PCIe slot can provide and connect directly to the computer's power supply. Finally, a video card includes at least one physical port to send a video signal to a monitor — typically HDMI or DisplayPort. Many video cards have multiple ports to connect to several monitors at once. A Gigabyte video card showing several DisplayPort and HDMI ports and multiple cooling fans While many people think that upgrading a video card is only for improving a computer's 3D graphics in video games, a new video card can improve 2D performance as well. For example, upgrading from an integrated GPU to a discrete video card can improve performance by adding dedicated VRAM, freeing up system RAM for other uses. New GPUs can also add hardware decoding support for new video codecs, enabling smooth video playback at high resolutions like 4K without straining the CPU. Updated September 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/video_card Copy

VIP

VIP Stands for "Virtual IP Address." A VIP (or VIPA) is a public IP address that may be shared by multiple devices connected to the Internet. Internally, each device has a unique local IP address, but externally, they all share the same one. VIPs are common in home and office networks. When a device connects to the network, the router assigns it a unique local IP address, typically via DHCP. Examples of local IP addresses include 192.168.0.2, 192.168.0.3, etc., with the router IP address set to 192.168.0.1. Some routers use the IP address 10.0.1.1 and assign IP addresses 10.0.1.2, 10.0.1.3, etc. Local IPs are merged into a single public (or "virtual") IP using network address translation (NAT). The virtual IP address is what identifies the devices on the Internet. A VIP often has a one-to-many relationship with devices on a network. Therefore, multiple devices connected to the same router may have the same IP address on the Internet. VIPs are also used by servers. For example, multiple web servers may share the same IP address, allowing them to distribute requests across multiple machines. This is useful for load balancing and redundancy. A "high availability" server, for instance, may have a single IP address shared by two separate computers. Updated March 7, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vip Copy
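As a quick illustration, Python's built-in ipaddress module can tell the private addresses a router assigns apart from a public address like a shared VIP; the addresses below are just examples.

  # Distinguishing local (private) addresses from public ones using the
  # standard ipaddress module. Private addresses sit behind the shared VIP.
  import ipaddress

  for addr in ["192.168.0.2", "10.0.1.3", "8.8.8.8"]:
      ip = ipaddress.ip_address(addr)
      kind = "local (private)" if ip.is_private else "public"
      print(f"{addr}: {kind}")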

Viral

Viral Contrary to what you might think, the term "viral" has nothing to do with computer viruses. Instead it refers to a digital video, image, or article that has spiked in popularity and has reached a large number of users in a short period of time. While there is no exact number of views that makes something "go viral," most viral media is viewed by more than a million people in less than a week. Viral videos are the most common type of viral media. Most viral videos are posted on YouTube, which provides free video hosting. Since YouTube has become the central location to view videos on the web, homemade videos have the potential to be viewed by millions of people around the world. While some users have success promoting their YouTube videos from other websites, most viral videos gain popularity by word of mouth. For example, if someone comes across a video that he thinks is especially amusing or shocking, he might forward the link to his friends. If his friends also like the video, they might tell their friends, who may tell other friends, etc. For a video to become viral, it needs to reach a certain threshold that might be considered a "tipping point." Once it reaches this level of popularity, the number of views or "hits" spikes upwards and the video becomes viral. This may happen when the video reaches YouTube's "Most Popular" list or is picked up and shown by a national news network. Once this happens, the already popular video gets even more publicity and goes viral. Updated February 9, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/viral Copy

Virtual Machine

Virtual Machine A virtual machine (or "VM") is an emulated computer system created using software. It uses physical system resources, such as the CPU, RAM, and disk storage, but is isolated from other software on the computer. It can easily be created, modified, or destroyed without affecting the host computer. Virtual machines provide similar functionality to physical machines, but they do not run directly on the hardware. Instead, a software layer exists between the hardware and the virtual machine. The software that manages one or more VMs is called a "hypervisor" and the VMs are called "guests" or virtualized instances. Each guest can interact with the hardware, but the hypervisor controls them. The hypervisor can start up and shut down virtual machines and also allocate a specific amount of system resources to each one. You can create a virtual machine using virtualization software. Examples include Microsoft Hyper-V Manager, VMware Workstation Pro, and Parallels Desktop. These applications allow you to run multiple VMs on a single computer. For example, Parallels Desktop for Mac allows you to run Windows, Linux, and macOS virtual machines on your Mac. VMs are ideal for testing software since developers can install one or more applications and revert to a saved state (or "snapshot") whenever needed. Testing software on a regular operating system can cause unexpected crashes and may leave some files lingering behind after the software is uninstalled. It is safer to test software on a virtual machine that is isolated from the operating system and can be fully reset as needed. Cloud-Based Virtual Machines As cloud services have grown in popularity, cloud-based VMs have become increasingly popular as well. "Cloud instances," as they are often called, run on a computer that is accessed over the Internet. The VM is often controlled through a web browser or a remote access utility. Cloud-based VMs are a common way for companies to test software deployments since they can test on dozens of machines without hosting the VMs locally. Updated September 21, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/virtual_machine Copy
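The sketch below is purely illustrative and is not a real hypervisor API: a small Python class that creates guests, allocates a share of host resources to each one, and takes snapshots, mirroring the hypervisor role described above. All names are invented.

  # Illustrative only: a "hypervisor" that allocates host resources
  # to guest VMs and starts and snapshots them.
  class Hypervisor:
      def __init__(self, cpus, ram_gb):
          self.free = {"cpus": cpus, "ram_gb": ram_gb}
          self.guests = {}

      def create_vm(self, name, cpus, ram_gb):
          if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
              raise RuntimeError("Not enough host resources")
          self.free["cpus"] -= cpus
          self.free["ram_gb"] -= ram_gb
          self.guests[name] = {"cpus": cpus, "ram_gb": ram_gb, "running": False}

      def start(self, name):
          self.guests[name]["running"] = True     # boot the guest OS

      def snapshot(self, name):
          return dict(self.guests[name])          # saved state to revert to

  host = Hypervisor(cpus=8, ram_gb=32)
  host.create_vm("test-vm", cpus=2, ram_gb=8)
  host.start("test-vm")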

Virtual Memory

Virtual Memory All computers have physical memory, or RAM, which loads the operating system and active programs. In some cases, RAM is soldered to the motherboard, while in others, it is installed using removable memory modules. Either way, the system memory is limited to the total RAM installed. Modern PCs typically have between 4 GB and 32 GB of RAM. When you open an application, your computer loads the app and related files into RAM. If you open enough programs at one time, your computer will eventually run out of memory. Instead of producing an "out of memory" error or forcing programs to quit, your computer can "swap" physical memory into virtual memory. Virtual memory is space on a storage device, such as an SSD or HDD, that serves as temporary system memory, similar to RAM. This "secondary memory" can expand a computer's memory beyond the physical memory limit. However, since storage devices are much slower than RAM, data stored in virtual memory must be swapped back into RAM when it is accessed. Swapping data between primary and secondary memory takes time. Therefore, the more your computer uses virtual memory, the more it will slow down. One of the best ways to improve your computer's performance is to install additional RAM. With more free memory, your computer won't need to use virtual memory as often, resulting in fewer slowdowns and faster overall speed. Updated March 3, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/virtualmemory Copy
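The toy Python example below illustrates the swapping idea: when the simulated "RAM" fills up, the least recently used page is moved out to a swap area, and reading it back later is the slow step. The capacity and page names are made up for illustration.

  # Toy model of swapping: a tiny "RAM" that evicts the least recently
  # used page to a swap area (standing in for disk) when it fills up.
  from collections import OrderedDict

  RAM_CAPACITY = 3
  ram = OrderedDict()    # page -> data, ordered by most recent use
  swap = {}              # pages moved out to "disk"

  def access(page):
      if page in ram:
          ram.move_to_end(page)                     # fast path: already in RAM
          return ram[page]
      data = swap.pop(page, f"data for {page}")     # slow path: read from disk
      if len(ram) >= RAM_CAPACITY:
          old_page, old_data = ram.popitem(last=False)
          swap[old_page] = old_data                 # swap out the oldest page
      ram[page] = data
      return data

  for page in ["A", "B", "C", "D", "A"]:
      access(page)
  print("in RAM:", list(ram), "| swapped out:", list(swap))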

Virtual Reality

Virtual Reality Virtual reality, abbreviated as "VR," is a technology that creates an immersive, interactive computer-generated world. The viewer experiences virtual reality while wearing a special headset that removes their vision of the outside world and replaces it with the virtual one, providing the illusion of looking around and moving through the simulated environment. Standard VR equipment consists of a headset equipped with a stereoscopic display (a separate screen for each eye), headphones or small speakers, and special sensors that detect the direction you're looking. The headsets also connect to handheld controllers that allow you to control your movement and interact with the environment. Some VR headsets are standalone devices containing their own CPU, GPU, battery, and storage. Other headsets connect to a computer that renders the environment and provides the power; these headsets can be smaller than the standalone ones since much of the work happens on the computer, but they require a wired connection that can be cumbersome. Many fields can make use of virtual reality. In addition to entertainment like video games and 360º video, VR headsets are also commonly used in education to provide students with more immersive instruction. VR is used in medical schools and hospitals to teach and practice complex surgeries, and businesses of all kinds use VR as a CAD and prototyping tool. VR equipment and experiences also form the basis of the concept of a metaverse — a shared virtual reality environment where people (represented by avatars) can meet and interact with each other. Updated January 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/virtualreality Copy

Virtualization

Virtualization Virtualization can refer to a variety of computing concepts, but it usually refers to running multiple operating systems on a single machine. While most computers only have one operating system installed, virtualization software allows a computer to run several operating systems at the same time. For example, a Windows computer with VMware Workstation installed can run Linux within the Windows interface. Similarly, a Macintosh computer can use Parallels Desktop to run Windows within the Mac OS X interface. When another operating system (OS) is running on top of the main system, it is called a "virtual machine." This is because it acts like a typical computer but is actually running on top of another operating system. Windows 11 running in macOS via Parallels virtualization software Virtualization software acts as a layer between a computer's primary OS and the virtual OS. It allows the virtual system to access the computer's hardware, such as the RAM, CPU, and video card, just like the primary OS. This is different than emulation, which actually translates each command into a form that the system's processor can understand. When Apple started using the same x86 processor architecture as Windows computers, it became possible to run both macOS and Windows on the same machine via virtualization, rather than emulation. Now that Apple has transitioned to Apple Silicon processors, which use the ARM architecture, only the ARM version of Windows can run "virtualized" in macOS. The x86 version of Windows must be emulated when run on an Apple Silicon machine. Another type of virtualization involves connecting to a remote computer system and controlling it from your computer. This is commonly referred to as remote access. Updated January 15, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/virtualization Copy

Virus

Virus A virus is a script or piece of malicious software that moves from one computer to another. It may spread through a social engineering attack that tricks a user into opening an infected file, or it may exploit software vulnerabilities to infect a computer without the user doing anything. A virus may delete or modify files on your computer or cause other performance problems as it consumes processing and memory resources. It may also expose your computer to unauthorized remote access, allowing hackers access to your personal files. Infected email attachments are one common method of spreading a virus. Software downloaded from unofficial websites or pirate software networks may also contain viruses. Phishing emails and other social engineering attacks can also convince someone to install infected software or open infected files. Other viruses exploit bugs in operating systems and applications to spread without user intervention — for example, visiting a malicious website in a vulnerable browser. Hackers have many reasons to write and spread viruses. Many do so for financial gain, either to steal valuable information (like bank account numbers and passwords) or to extort victims using ransomware. Some spread viruses to add computers to a botnet for participating in DDoS attacks. State-sponsored hackers create targeted viruses for militaries and intelligence organizations that can steal state secrets and damage infrastructure as part of a cyber warfare attack. Some hackers create a virus to create a name for themselves or to prove their worth before joining a hacking group, while some do so simply to cause harm by destroying data and crashing servers. Windows 11 includes built-in antivirus software A virus infection may be difficult to remove, so it's best to prevent one before it happens. Antivirus utilities check all incoming files for viruses and run recurring full-disk scans to examine existing files. Keeping antivirus software and virus definitions up-to-date ensures that it knows about new viruses as they're discovered. Finally, it's wise to avoid files and programs from suspicious or unknown sources (particularly email attachments) and install updates to your operating system and applications that patch vulnerabilities. Updated December 26, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/virus Copy

Virus Definition

Virus Definition A virus definition is a binary pattern (a string of ones and zeros) that identifies a specific virus. By checking a program or file against a list of virus definitions, antivirus software can determine if the program or file contains a virus. Most antivirus and Internet security programs reference a database of virus definitions when scanning files for viruses. This is an effective way to detect known viruses. However, when new viruses are created, antivirus software may not recognize them. Therefore, most antivirus programs automatically update the virus definitions from an online database on a regular basis (such as once a week). Some antivirus programs use known virus definitions to generate heuristics that can detect unknown viruses. These viruses may not match a virus definition exactly, but they may be similar enough that the antivirus software can mark the file as a possible virus. While this offers extra protection against unknown viruses, it can also produce "false positives," labeling files as potentially harmful when they do not contain viruses. The accuracy of antivirus heuristics is improved over time based on the feedback end users and developers provide to antivirus software companies. This feedback is used to whitelist or blacklist certain files. By combining this information with up-to-date virus definitions, antivirus software can produce fewer false positives, yet still catch actual viruses. Updated October 31, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/virus_definition Copy
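A simplified Python sketch of the matching step is shown below: each "definition" is just a byte pattern, and a file is flagged if it contains any of them. The patterns and file name here are invented; real antivirus engines use far more sophisticated signatures and heuristics.

  # Simplified signature scan: flag a file if it contains any known
  # byte pattern. The patterns below are invented, not real signatures.
  definitions = {
      "Example.Virus.A": bytes.fromhex("deadbeef00ff41"),
      "Example.Virus.B": bytes.fromhex("4d5a9000cafef00d"),
  }

  def scan(path):
      with open(path, "rb") as f:
          data = f.read()
      return [name for name, pattern in definitions.items() if pattern in data]

  matches = scan("download.exe")    # hypothetical file
  print("Infected:" if matches else "Clean", matches)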

Visual Basic

Visual Basic Visual Basic is a programming language and development environment created by Microsoft. It is an extension of the BASIC programming language that combines BASIC functions and commands with visual controls. Visual Basic provides a graphical user interface (GUI) that allows the developer to drag and drop objects into the program as well as manually write program code. Visual Basic, also referred to as "VB," is designed to make software development easy and efficient, while still being powerful enough to create advanced programs. For example, the Visual Basic language is designed to be "human readable," which means the source code can be understood without requiring lots of comments. The Visual Basic program also includes features like "IntelliSense" and "Code Snippets," which automatically generate code for visual objects added by the programmer. Another feature, called "AutoCorrect," can debug the code while the program is running. Programs created with Visual Basic can be designed to run on Windows, on the Web, within Office applications, or on mobile devices. Visual Studio, the most comprehensive VB development environment, or IDE, can be used to create programs for all these mediums. Visual Studio .NET provides development tools to create programs based on the .NET framework, such as ASP.NET applications, which are often deployed on the Web. Finally, Visual Basic is available as a streamlined application that is used primarily by beginning developers and for educational purposes. Updated November 5, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/visualbasic Copy

VLAN

VLAN Stands for "Virtual Local Area Network," or "Virtual LAN." A VLAN is a custom network created from one or more existing LANs. It enables groups of devices from multiple networks (both wired and wireless) to be combined into a single logical network. The result is a virtual LAN that can be administered like a physical local area network. In order to create a virtual LAN, the network equipment, such as routers and switches, must support VLAN configuration. The hardware is typically configured using a software admin tool that allows the network administrator to customize the virtual network. The admin software can be used to assign individual ports or groups of ports on a switch to a specific VLAN. For example, ports 1-12 on switch #1 and ports 13-24 on switch #2 could be assigned to the same VLAN. Say a company has three divisions within a single building — finance, marketing, and development. Even if these groups are spread across several locations, VLANs can be configured for each one. For instance, each member of the finance team could be assigned to the "finance" network, which would not be accessible by the marketing or development teams. This type of configuration limits unnecessary access to confidential information and provides added security within a local area network. VLAN Protocols Since traffic from multiple VLANs may travel over the same physical network, the data must be mapped to a specific network. This is done using a VLAN protocol, such as IEEE 802.1Q, Cisco's ISL, or 3Com's VLT. Most modern VLANs use the IEEE 802.1Q protocol, which inserts an additional header or "tag" into each Ethernet frame. This tag identifies the VLAN to which the sending device belongs, preventing data from being routed to systems outside the virtual network. Data is sent between switches using a physical link called a "trunk" that connects the switches together. Trunking must be enabled in order for one switch to pass VLAN information to another. 4,094 VLANs can be created within an Ethernet network using the 802.1Q protocol, but in most network configurations only a few VLANs are needed. Wireless devices can be included in a VLAN, but they must be routed through a wireless router that is connected to the LAN. Updated November 23, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vlan Copy
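As a rough illustration, the Python function below packs the 4-byte 802.1Q tag that gets inserted into an Ethernet frame: the fixed TPID value 0x8100 followed by the priority, DEI, and 12-bit VLAN ID fields. The sample values are arbitrary.

  # Building a 4-byte 802.1Q VLAN tag: 16-bit TPID (0x8100), then a
  # 3-bit priority, 1-bit DEI, and the 12-bit VLAN ID.
  import struct

  def vlan_tag(vlan_id, priority=0, dei=0):
      if not 1 <= vlan_id <= 4094:
          raise ValueError("VLAN ID must be between 1 and 4094")
      tci = (priority << 13) | (dei << 12) | vlan_id
      return struct.pack("!HH", 0x8100, tci)

  print(vlan_tag(100, priority=5).hex())   # 8100a064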

VLB

VLB Stands for "VESA Local Bus." VLB, also abbreviated as VL-bus, is an expansion bus on some computer motherboards. A VLB interface adds an expansion slot for a video card to accelerate graphics processing. A VLB expansion slot consists of an ISA slot paired with an additional slot in line behind it. It offered memory bandwidth of up to 200 MB/s when used with a 50 MHz 486 processor, compared to a maximum of 16 MB/s over the ISA bus. The VLB interface was created in 1992 and was often included on motherboards designed for the Intel 486 processor. The ISA bus standard on computers of the time had insufficient bandwidth for graphics cards, which were becoming increasingly popular with the rise of graphical user interfaces. The Video Electronics Standards Association (VESA) designed the VLB standard to extend the ISA bus with an additional connector for higher memory bandwidth for more advanced graphics cards. The VLB interface was available on computer motherboards for only a few years. Since the VLB interface was designed for the 486 processor generation, it was deprecated following the introduction of the Intel Pentium processor. Chipsets designed for the Pentium processor included the PCI bus, offering expansion slots with faster speeds than ISA slots at a much smaller size than VLB slots. Updated October 21, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vlb Copy

VLE

VLE Stands for "Virtual Learning Environment." A VLE is a virtual classroom that allows teachers and students to communicate with each other online. Class information, learning materials, and assignments are typically provided via the Web. Students can log in to the class website to view this information and may also download assignments and required reading materials to their computers. Some VLEs even allow assignments and tests to be completed online. In a virtual classroom, the teacher may communicate with the students in real-time using video or Web conferencing. This type of communication is typically used for giving lectures and for question and answer sessions. If the teacher only needs to send out a homework assignment, he or she can simply post a bulletin on the class website. The students may also receive an e-mail notification letting them know a new assignment has been posted. If class members have questions about the homework, they can participate in online forums or submit individual questions to the teacher. Virtual learning environments are a popular method of e-learning, which refers to learning through electronic means. While a VLE cannot fully replace the traditional classroom, it can be a useful way of teaching students who reside in many different locations. Updated July 23, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vle Copy

Vlog

Vlog Vlog is short for "video blog" and is pronounced "vlog" (one syllable). A vlog is a blog, or web log, that includes video clips. It may be entirely video-based or may include both video and written commentary. Several types of vlogs are available on the Web, including instructional videos, travel updates, and personal commentaries. People who create vlogs are known as "vloggers." Some vloggers post videos for fun, while others run vlogs for the purpose of generating revenue through advertisements. While it's possible to set up a vlog website, many vloggers post their vlogs on YouTube since it makes their videos easier to find. Additionally, YouTube offers free video hosting, which means vloggers can post unlimited videos without paying web hosting fees. In order to create a vlog, all you need is a video camera, an Internet connection, and a good idea. While a simple cell phone video camera can get the job done, a standalone HD video camera will produce much higher quality videos. You can publish videos as often as you like, though if you decide to maintain a blog, it helps to post them at consistent intervals, such as once a day or once a week. This helps your viewers know when new videos will be available, which makes them more likely to continue visiting your vlog. Updated August 31, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vlog Copy

VoIP

VoIP Stands for "Voice over Internet Protocol." VoIP technology allows people to place voice calls over the Internet. It uses the Internet Protocol (IP) and other standard protocols to send voice communication digitally through data packets over the Internet, just like an email message or file transfer, instead of as an analog signal over telephone lines. VoIP is also known as IP telephony or Internet telephony. VoIP is far more versatile than a traditional POTS telephone system. You can use dedicated VoIP telephones that look like regular analog office telephones, but connect to a computer over USB or a local network using Ethernet. You can also make calls using VoIP software on a computer using a headset or a microphone/speaker setup. Mobile VoIP apps let you use a VoIP service on a smartphone alongside its mobile phone service. You can even connect an analog telephone to a VoIP service using a VoIP adapter. A Cisco IP telephone designed for business VoIP systems Some VoIP providers limit calls to within their own network, while others allow their customers to call anyone with a standard telephone number (including international numbers). VoIP services are ideal for businesses that need a large number of voice lines in sales offices and call centers. Many ISPs offer home VoIP services as part of a bundle with home Internet and television, providing a modem or router that you can plug a telephone into directly without any extra configuration or software. Advantages and Drawbacks VoIP has several advantages over an analog telephone system, and several drawbacks as well. First, call quality is improved because a digital system does not need to compress audio for analog transmission. Voice calls can be encrypted using the same forms of encryption used by other Internet traffic. VoIP implemented via software can include many of the same features as an advanced POTS system, like call transferring and forwarding, automatic call recording, and auto-attendants, but with easier configuration. Unlike analog telephones, VoIP often does not work in the event of power loss due to its reliance on other network equipment like a modem and router. More importantly, calling emergency services from a VoIP number does not provide the same information to responders. Emergency calls are associated with the address on file with the VoIP provider, which is not necessarily the location that the call is coming from. Updated July 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/voip Copy
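The bare-bones Python sketch below shows the core idea of carrying voice as data packets over UDP. The destination address, port, and capture file are hypothetical, and real VoIP systems layer protocols such as RTP and SIP on top, adding codecs, sequence numbers, and timestamps.

  # Bare-bones idea of voice over IP: read small chunks of raw audio and
  # send each one as a UDP datagram (real systems use RTP/SIP on top).
  import socket

  DEST = ("203.0.113.10", 5004)    # hypothetical destination and port
  CHUNK = 320                      # 20 ms of 8 kHz, 16-bit mono audio

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  with open("voice.pcm", "rb") as audio:        # hypothetical captured audio
      while chunk := audio.read(CHUNK):
          sock.sendto(chunk, DEST)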

Volatile Memory

Volatile Memory Volatile memory is memory that requires electric current to retain data. When the power is turned off, all data is erased. Volatile memory is often contrasted with non-volatile memory, which does not require power to maintain the data storage state. The most common type of volatile memory is random-access memory, or RAM. Computers and other electronic devices use RAM for high-speed data access. The read/write speed of RAM is typically several times faster than a mass storage device, such as a hard disk or SSD. When a computer boots up, it loads the operating system into RAM. Similarly, when you open an application on your computer or mobile device, it is loaded into RAM. Loading the operating system and active applications into RAM allows them to run much faster. Since RAM is volatile memory, all data stored in RAM is lost when the host device is turned off or restarted. The operating system must be loaded into RAM again when the device is turned on. While this requires extra processing time during startup, the "reset" that volatile memory provides is an effective way to remove lingering issues that may occur while a computer is running. This is why restarting a computer or electronic device is an effective way to fix common problems. System RAM is the most common type of volatile memory, but several other types exist. Below are some examples of volatile memory: System RAM (DRAM) Video RAM (VRAM) Processor L1 and L2 cache HDD and SSD disk cache NOTE: The "volatile" aspect of the term "volatile memory" refers to how data is lost when the power is turned off. It does not refer to the voltage required to maintain the data. Updated October 18, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/volatile_memory Copy

Volume

Volume The word "volume" has several different meanings. In physics, it measures both sound and three-dimensional space. Audio volume defines the intensity of soundwaves, or how loud a sound is. Spatial volume describes how much space a three-dimensional object takes up. In the computer world, "volume" has an entirely different definition related to data storage. A computer volume is similar to a book volume, or a single book in a series. It refers to a specific storage area, such as an HDD or SSD. It may also be a section within a storage device. Before a computer can access data from a volume, it must mount it first. Once a volume is mounted, the computer may read and write data, depending on the volume's permissions. Unmounting the volume prevents it from being accessed. While the terms "volume" and "storage device" are often used interchangeably, they do not have a one-to-one relationship. For example, several storage devices can be linked together in a single volume. Alternatively, a single storage device may be split into multiple volumes. Containers vs Volumes vs Partitions Some file systems, such as NTFS, use volumes as the top level of data storage. Others, such as Apple's APFS, use "containers." A single disk may have multiple containers. A container can store more than one volume and a volume may have multiple partitions. It is also possible for a storage device to have one container with a single volume that has one partition. While a volume may contain multiple partitions, most file systems create separate volumes for each partition. Windows, for example, creates a new volume for each partition, labeling each volume with a letter, such as C:, D:, E:, etc. These volumes are also called "drives," even though they may not be separate physical drives. NOTE: Since there is often a one-to-one relationship between volumes and partitions, these terms are also used interchangeably. However, a partition is technically part of a volume. For example, formatting a volume will erase all partitions contained in that volume. Updated August 11, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/volume Copy
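For example, the third-party psutil package (installed with pip install psutil) can list the volumes currently mounted on a computer, along with each one's mount point and file system type.

  # List mounted volumes: device, mount point, and file system type.
  import psutil

  for part in psutil.disk_partitions():
      print(f"{part.device} mounted at {part.mountpoint} ({part.fstype})")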

Voxel

Voxel A voxel is a volumetric, three-dimensional pixel. A single voxel is a cube-shaped particle that represents a point in 3D space. A group of voxels, arranged in a 3D grid, combine to create a 3D shape in a way similar to how pixels combine to create a 2D image. As a method of creating three-dimensional graphics, voxels are a less-common alternative to 3D wireframes and polygons. Instead of building a 3D object out of vertices connected by lines and flat polygons, an object made of voxels consists of uniformly-shaped blocks attached to each other. Each voxel can have its own properties like color, transparency, and texture. An object made of voxels can be hollow, or it can include voxels inside it with their own properties; for example, a 3D model of an avocado made out of polygons would be a hollow avocado-shaped object, while one made of voxels would have an outer skin, an inside layer, and a pit in the middle. While polygon-based 3D graphics are far more common in video games than voxels, games like Minecraft often use voxels for procedural world generation and destructive terrain effects. Scientific and medical imaging also frequently use voxel-based 3D rendering to represent objects using 3D cross-sections. For example, doctors can view a 3D medical imaging scan of a diseased organ from any angle, including a cross-section to see the problem areas inside. Updated March 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/voxel Copy
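The short NumPy example below represents a voxel grid as a 3D array and fills it with a rough, layered sphere (pit, flesh, and skin), similar to the avocado example above. The grid size and material values are arbitrary.

  # A voxel grid as a 3D array: each cell holds a material value.
  import numpy as np

  SIZE = 16
  grid = np.zeros((SIZE, SIZE, SIZE), dtype=np.uint8)   # 0 = empty space
  center = np.array([SIZE // 2] * 3)

  for index in np.ndindex(grid.shape):
      distance = np.linalg.norm(np.array(index) - center)
      if distance < 3:
          grid[index] = 3       # pit
      elif distance < 6:
          grid[index] = 2       # flesh
      elif distance < 7:
          grid[index] = 1       # skin

  print("solid voxels:", int(np.count_nonzero(grid)))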

VPI

VPI Stands for "Virtual Path Identifier." A VPI is an 8- to 12-bit header in an Asynchronous Transfer Mode (ATM) cell that identifies the path the cell should take to its destination. An ATM cell's VPI works in conjunction with its Virtual Channel Identifier (VCI) to guide a cell as it passes through multiple ATM switches. The size of a VPI depends on the size and scale of the ATM network; small networks use an 8-bit VPI that supports hundreds of paths, while large networks use a 12-bit VPI that supports several thousand unique paths. ATM is a networking technology that creates virtual paths for data cells to travel across to their destination. Unlike IP routing, where a packet in a data stream can take one of many possible routes, ATM cells in a data stream all take the same route. The VPI information in each cell's header identifies the specific virtual path the cell should take. While ATM was designed to be more efficient at routing network traffic, in practice ATM networks were quite complex. The small size of each data cell, combined with the data taken up by VPI and VCI headers, resulted in significant overhead. Ethernet and IP-based networking technologies were simpler and scaled better to high traffic, eventually taking over the role that ATM networking had filled. Updated August 10, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vpi Copy
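As an illustration, the Python function below packs the 5-byte header of a UNI-format ATM cell, where the VPI is the 8-bit field that follows the GFC. The HEC checksum is simply left as zero here, and the sample VPI and VCI values are arbitrary.

  # Packing a UNI ATM cell header: 4-bit GFC, 8-bit VPI, 16-bit VCI,
  # 3-bit payload type, 1-bit CLP, and an 8-bit HEC (left as 0 here).
  def atm_uni_header(vpi, vci, gfc=0, pt=0, clp=0, hec=0):
      if not 0 <= vpi <= 0xFF:
          raise ValueError("A UNI VPI is an 8-bit field (0-255)")
      word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
      return word.to_bytes(4, "big") + bytes([hec])

  print(atm_uni_header(vpi=5, vci=33).hex())   # 0050021000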

VPN

VPN Stands for "Virtual Private Network." A VPN is a service that provides secure, anonymous Internet access. It "tunnels" your Internet connection from your ISP through one or more VPN servers, which hides your actual IP address and location. Besides obfuscating your IP address, most VPNs encrypt your connection to provide extra security. The encryption encodes all data you send and receive, ensuring no one can eavesdrop on your Internet activity. While fast VPNs can process and encrypt data in near real-time, encryption still adds an extra step, which may affect your Internet speed. Most VPNs work with both desktop and mobile devices. Some VPN services are free to use, but the highest-quality ones charge a monthly fee. Generally, paid VPNs are faster and more reliable than free ones. Examples of VPN providers include: NordVPN ExpressVPN CyberGhost Surfshark ProtonVPN NOTE: Businesses may also create private VPNs that employees must use to remotely access sensitive data. Company-specific VPNs typically require authentication, encrypt all connections, and limit the Internet locations users can access. Updated February 22, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vpn Copy

VPS

VPS Stands for "Virtual Private Server." A VPS is a server created using software virtualization. It functions like a physical server, but it is a virtualized instance created within a server. A single physical machine can host multiple virtual private servers. A cloud-based VPS may be hosted across multiple servers. The most common type of VPS is a web host. Many web hosting companies offer VPS hosting solutions as an alternative to shared hosting and dedicated hosting. A VPS sits in between the two options, usually in both performance and price. Like a shared host, a VPS may share the resources of a physical machine with other hosting accounts. However, a VPS is custom-configurable like a dedicated hosting solution, and it is isolated ("private") from other accounts. Both single-machine and cloud-based VPSes are managed using a software program called a hypervisor. The machine that runs the hypervisor is called the host machine and the individual virtual private servers are called guest machines or guest instances. The hypervisor can start and stop the virtual machines and allocates system resources, such as CPU, memory, and disk storage, to each VPS. Virtual private servers have become a popular choice for web hosting because they offer many benefits of dedicated servers at a lower cost. They also provide the added benefit of easy scalability. Since each VPS is virtualized, the configuration can be updated with a software modification rather than a hardware upgrade. Still, dedicated servers often provide better performance since all the resources of the physical machine are dedicated to a single server. Updated July 5, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/vps Copy

VRAM

VRAM Stands for "Video Random Access Memory." VRAM is dedicated, high-speed memory used by a computer's GPU. It temporarily stores data for the GPU to access, including textures, shaders, and the frame buffer. VRAM is connected directly to the GPU on the graphics card, which allows the GPU to access VRAM at higher speeds and with less latency than system RAM; however, being integrated means that VRAM cannot be upgraded without replacing the entire graphics card. Some computer systems with integrated GPUs use a single pool of memory shared between the CPU and GPU. This is known as a Unified Memory Architecture. Different GPU and video-related tasks use VRAM in their own ways. Video games and 3D modeling/animation applications use VRAM to store texture and polygon information during image rendering. Video editing applications use VRAM to store video frame data, allowing editors to composite multiple video layers and render motion graphics and special effects. Graphic design and CAD applications typically have lower requirements than 3D and video applications but still benefit from enough fast VRAM to create a smooth and responsive frame buffer, particularly when using multiple high-resolution monitors. VRAM chips on an NVIDIA graphics card Modern GPUs use a special type of memory for VRAM called Graphics Double Data Rate Synchronous Dynamic Random Access Memory, or GDDR SDRAM. Each generation of GDDR SDRAM increases the available memory bandwidth to allow higher data transfer rates. The current standard (as of 2023) is GDDR6, which supports data transfer rates between 112 GB/s and 144 GB/s. NOTE: The acronym VRAM was formerly used specifically for dual-ported video RAM, which was used for frame buffers in video cards in the 1990s and is now obsolete. The term now refers broadly to dedicated video memory on a graphics card. Updated June 21, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vram Copy

VRML

VRML Stands for "Virtual Reality Modeling Language." VRML is a modeling language designed to deliver interactive 3D scenes over the World Wide Web. VRML files are similar to HTML files, which use text and markup language to build a webpage. VRML files use text and markup language to create a 3D scene (or "world") and the objects within it, define how the user interacts with it, and make hyperlinks to other worlds. A web user can view VRML worlds through a dedicated VRML browser or a web browser using a VRML plug-in. A VRML file (with a .wrl file extension) is a plain text file that defines the scene and objects within it using a standard markup language. 3D objects are created by specifying individual points along the X, Y, and Z axes, then connecting them to create a wireframe, and finally using those wireframes to create surfaces and apply textures. VRML worlds can also include animation, sounds, lights, and interactive elements. The language is simple enough that someone with a sufficient understanding of 3D graphics can write a VRML file in a plain text editor. The first version of the VRML specification was released in 1994, supporting static 3D worlds. A second version, VRML97, was released in 1997, adding support for animation and interaction through a scripting language. The format never found a large audience due to the limited bandwidth of dial-up Internet that most people used at the time. An XML-based successor language, X3D, was released in 2001. Updated October 19, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vrml Copy

Vulkan

Vulkan Vulkan is an application programming interface (API) designed for rendering 3D graphics in applications and video games. It gives developers more direct access to a computer's GPU than older APIs, which can improve efficiency and 3D performance. Vulkan is a cross-platform API supported by computers and mobile devices. It was developed by the Khronos Group, an industry group focused on open standards for 3D graphics, and was first introduced in 2016. Like other 3D graphics APIs, Vulkan helps game developers tell a computer's GPU how to draw and animate 3D models and environments. It provides a standard way to send commands to the GPU, like how to build a 3D shape out of polygons, map textures to it, and combine shapes into a 3D environment. However, unlike older APIs, Vulkan is a low-level API that provides developers direct access to the GPU hardware. High-level APIs, like OpenGL, provide a library of functions that games can call on to manipulate 3D objects. This creates a layer of abstraction between game software and the GPU through a driver and requires the CPU to translate API calls into GPU instructions. Vulkan removes most of that abstraction by providing more direct access to the GPU and VRAM. Game developers have to write or import more functions into their source code, making it more complicated, but this allows them to optimize their games for better 3D performance while reducing the work needed from the CPU. 3D video games like Baldur's Gate III use the Vulkan API to control the GPU and render graphics Since it is cross-platform, Vulkan allows game developers to use the same graphics code in different versions of their game. Windows, Linux, and Android support the Vulkan API directly, while macOS and iOS support it through a wrapper that translates Vulkan commands to Apple's Metal API. Updated September 8, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/vulkan Copy

W3C

W3C Stands for "World Wide Web Consortium." The W3C is an international community that includes a full-time staff, industry experts, and several member organizations. These groups work together to develop standards for the World Wide Web. The mission of the W3C is to lead the Web to its full potential by developing relevant protocols and guidelines. This is achieved primarily by creating and publishing Web standards. By adopting the Web standards created by the W3C, hardware manufacturers and software developers can ensure their equipment and programs work with the latest Web technologies. For example, most Web browsers incorporate several W3C standards, which allows them to interpret the latest versions of HTML and CSS code. When browsers conform to the W3C standards, it also helps Web pages appear consistent across different browsers. Besides HTML and CSS standards, the W3C also provides standards for Web graphics (such as PNG images), as well as audio and video on the Web. The organization also develops standards for Web applications, Web scripting, and dynamic content. Additionally, the W3C provides privacy and security guidelines that websites should follow. The World Wide Web Consortium has played a major role in the development of the Web since it was founded in 1994. As Web technologies continue to evolve, the W3C continues to publish new standards. For example, many of the technologies included in Web 2.0 websites are based on standards developed by the W3C. To learn more about the W3C and the current standards published by the organization, visit the W3C website. Updated May 6, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/w3c Copy

WAIS

WAIS Stands for "Wide Area Information Server." WAIS (pronounced "ways") was a client-server database search system introduced in 1990 before the widespread adoption of the World Wide Web. It indexed the contents of databases located on multiple servers and made them searchable over networks, including the Internet. Text searches of the databases were ranked by relevance, where files with more keyword hits were listed first. As the web grew in popularity, WAIS usage declined and eventually faded into obsolescence. The service has now been completely replaced by modern search engines. How WAIS Functioned A central server called the Directory of Servers stored and indexed a database of other publicly-available databases. Since data storage was limited by the technology of the day, databases were distributed amongst several servers. To conduct a WAIS search, one would connect via telnet to a server running a WAIS client or install and run their own client. Once connected to the Directory of Servers, a WAIS user could browse a large number of distributed databases, each focused on a specific topic (usually an academic discipline, such as history, biology, or social sciences). WAIS was also popular as a way to search for text within individual Gopher servers. WAIS inspired modern search engines as a way to quickly find relevant information. As the World Wide Web caught on and supplanted Gopher servers, WAIS declined in popularity. Updated October 3, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wais Copy

Wallpaper

Wallpaper A wallpaper, or desktop background, is a background image that covers the desktop of a computer or other device. Most operating systems provide a default wallpaper and allow you to choose your own. Wallpapers can be any type of image, but most users prefer a clean background that is not too "busy." The simpler the wallpaper image is, the easier it is to view icons and windows on the desktop. Some people prefer realistic backgrounds, while others opt for computer-generated images. Recent versions of macOS have used digital photos that correspond with the names of each OS version as the default wallpaper. Examples include Yosemite National Park, the Sierra Nevada mountains, and Catalina Island. Conversely, the default wallpapers for Windows 8 and Windows 10 are computer-generated images. Besides choosing a wallpaper style, it helps to choose an image that matches the resolution of your display. For example, if you select an HD (1920x1080) image for a 4K display, it will appear blurry. Additionally, if the aspect ratio does not match your display, part of the image may get cut off when applied as a wallpaper on your device. NOTE: A wallpaper is different than a screen saver, which is an animation that appears when a device is idle. Updated September 10, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wallpaper Copy

WAMP

WAMP Stands for "Windows, Apache, MySQL, and PHP." WAMP is a variation of LAMP for Windows systems and is often installed as a software bundle (Apache, MySQL, and PHP). It is often used for web development and internal testing, but may also be used to serve live websites. The most important part of the WAMP package is Apache (or "Apache HTTP Server"), which is used to run the web server within Windows. By running a local Apache web server on a Windows machine, a web developer can test webpages in a web browser without publishing them live on the Internet. WAMP also includes MySQL and PHP, which are two of the most common technologies used for creating dynamic websites. MySQL is a high-speed database, while PHP is a scripting language that can be used to access data from the database. By installing these two components locally, a developer can build and test a dynamic website before publishing it to a public web server. While Apache, MySQL, and PHP are open source components that can be installed individually, they are usually installed together. One popular package is called "WampServer," which provides a user-friendly way to install and configure the "AMP" components on Windows. NOTE: The "P" in WAMP can also stand for either Perl or Python, which are other scripting languages. The Mac version of LAMP is known as MAMP. Updated May 23, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wamp Copy

WAN

WAN Stands for "Wide Area Network." A WAN is a computer network that spans a wide geographic area. It connects local networks from different locations together to serve as a network of smaller networks. Businesses, schools, and other organizations maintain their own WANs to connect devices and share data across multiple sites. The Internet is the most prominent example of a WAN, connecting computers and networks worldwide. Each location within a WAN maintains an internal LAN, with a router providing the link between the internal and external connections. The connections between LANs in a WAN can take multiple forms. One method to connect networks across a WAN is by leased lines. These are data lines provided by an ISP, dedicated to a particular WAN, and not accessible to other networks. Leased lines are fast and private but expensive. Other WANs operate over regular Internet connections, maintaining privacy by tunneling the data connections through a VPN. Some WANs will use a leased line as a primary connection, with tunneling over the Internet available as a backup. For example, school districts will often use a WAN to connect the internal LANs of multiple school buildings and district offices spread out in several locations within a city. Since the WAN functions as a single network, internal resources are available from any school. A substitute teacher can log in from a new classroom each day and still access what they need. Sensitive student information sent across the WAN through a leased line is kept private from outside snooping. Finally, a teacher can access the WAN to grade papers from home by connecting through the district's VPN. Updated October 10, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wan Copy

Wardriving

Wardriving Wardriving is the act of searching for Wi-Fi networks from a moving vehicle. It involves slowly driving around an area with the goal of locating Wi-Fi signals. This may be accomplished by an individual or by two or more people, with one person driving and others searching for wireless networks. Wardriving may be as simple as searching for free Wi-Fi using a smartphone inside an automobile. However, the definition usually applies to a hardware and software configuration specifically designed for locating and recording Wi-Fi networks. Wardriving equipment typically includes: A car or other automobile A laptop A Wi-Fi antenna A GPS device Wardriving software A wardriver can use the items listed above to locate all the Wi-Fi signals in a specific area. The laptop runs the wardriving software, which communicates with both the GPS and Wi-Fi hardware. The GPS receiver records the current location as the car is moving, while the Wi-Fi transceiver detects signals of wireless networks present in each location. The antenna extends the range of the signal detection compared to a typical laptop. Wardriving software may record the location, signal strength, and the status of each network found (for example, if it is open or encrypted, and what type of encryption the network uses). The goal of wardriving may be to find a single usable Wi-Fi network or it may be to map all Wi-Fi signals within a specific area. The latter is also called "access point mapping." While the act of wardriving itself may not be malicious, the data can be used to publicize and/or exploit open or unsecure networks. It is a good reminder to secure your own wireless network with a strong password so it cannot be accessed by strangers. NOTE: The term "wardriving" comes from "wardialing," a systematic method of dialing phone numbers in search of modems popularized in the movie WarGames. "Warbiking," "warwalking," and "warrailing" are variations of wardriving. Updated April 14, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wardriving Copy

Warm Boot

Warm Boot A warm boot (also called a "soft boot") is the process of restarting a computer. It may be used in contrast to a cold boot, which refers to starting up a computer that has been turned off. Warm boots are typically initiated by a "Restart" command in the operating system. For example, to perform a warm boot on a Windows system, select Shut Down → Restart from the Start Menu. If you use a Mac, you can perform a warm boot by selecting Restart… from the Apple Menu. Warm booting (restarting a computer) is more common than cold booting since most people leave their computers in sleep mode when they are not using them. While a home computer may not need to be turned off for months, it may need to be restarted every few days or weeks to complete new software installations. There should be no difference in the way a computer performs after starting from a warm boot vs a cold boot. In both cases, the computer must complete the full boot sequence. This process loads all system files, including any that were installed immediately before the computer was restarted. The only difference is that some machines may perform a more complete power-on self-test (POST) at the beginning of a cold boot. Updated August 23, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/warm_boot Copy

WAVE

WAVE WAVE is an audio format used to store sound data. It is similar to the AIFF format, but is based on the Resource Interchange File Format (RIFF), a container format designed to store data in "chunks." Therefore, each WAVE file contains one or more chunks that store audio data or other information. A typical WAVE file includes a format ("fmt") chunk and a data chunk. The format chunk describes how the waveform data is stored and specifies the number of channels, sampling rate, and compression type. The data chunk contains the actual audio data and is separated into channels (such as left and right for a stereo audio recording). Optional chunks include the silent ("slnt") chunk, which defines silent parts of the audio, and the wave list ("wavl") chunk, which specifies the locations of silent sections. WAVE files may also include several other optional chunks, such as the cue chunk, which marks specific points in the audio, and the info chunk, which is used to store metadata that describes the file. The WAVE format supports several types of media compression, including Microsoft ADPCM, Yamaha ADPCM, a-law, µ-law, GSM, and MPEG compression. However, most WAVE files store data in an uncompressed (PCM) format. For example, most Windows programs that rip audio from CDs export the data as uncompressed WAVE files. These files maintain the original CD-quality audio, which includes two audio channels, a sample size of 16 bits, and a sampling rate of 44.1 kHz. This is identical to the uncompressed AIFF files exported by most Macintosh audio ripping utilities. Since the WAVE and AIFF formats are so similar, most DAW applications and audio programs can read and write both formats. NOTE: While WAVE files contain binary data, you can tell if an audio file is saved in the WAVE format by dragging it to a text editor. The first several characters should read something similar to "RIFF WAVEfmt." File extensions: .WAV, .WAVE Updated March 1, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wave Copy
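For illustration (not part of the original definition), the short Python sketch below reads the values stored in a WAVE file's format chunk using the standard-library wave module; the filename is a placeholder.

  import wave

  # Read the format ("fmt") information from a WAVE file.
  # "example.wav" is a hypothetical filename.
  with wave.open("example.wav", "rb") as wav:
      print("Channels:", wav.getnchannels())                 # e.g., 2 for stereo
      print("Sample size:", wav.getsampwidth() * 8, "bits")  # e.g., 16 bits
      print("Sampling rate:", wav.getframerate(), "Hz")      # e.g., 44100 Hz

A CD-quality rip as described above would report 2 channels, 16 bits, and 44100 Hz.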

Waveform

Waveform A waveform is a visual representation of an audio or electrical signal that charts the amplitude, or strength of a signal, over time. Audio editors, such as DAWs, display waveforms for recorded audio tracks to provide a snapshot of each recording. Electrical monitoring equipment uses waveforms to visualize voltages or currents in electrical circuits. The most basic form of an audio wave is a sine wave, which oscillates smoothly at a consistent frequency. Other examples of generated waves include square waves (which instantly switch between two values), triangle waves (whose amplitude rises and falls at a constant rate), and sawtooth waves (where the amplitude either slowly rises then abruptly falls, or vice versa). Recorded audio waves are far more complex, since multiple sounds added together overlap, creating complex waves that don't oscillate smoothly or at a consistent frequency. From left to right — a basic sine wave, a square wave, a triangle wave, and a sawtooth wave You can get a lot of information about an audio signal by viewing its waveform. The amplitude reflects how powerful the sound is — a short waveform is quiet, while a tall waveform is loud. The frequency at which the signal oscillates between maximum and minimum values also affects the sound. The faster the frequency, the higher-pitched the sound. Noticeable changes in a waveform are also indicators of when certain parts of a recording take place. For example, the waveform of a musical performance may start small when a single vocalist is singing, then get larger when instruments join in. The waveform often peaks when percussive notes, such as drums, are played. This visual representation enables audio editors to locate certain parts of a song without even listening to the recording. NOTE: Signals can also be viewed as waveforms plotted against frequency, rather than time. Waveforms representing the frequency domain — also called "frequency spectrums" — show how much of a signal's energy is present at different frequencies at a given point in time. Updated December 23, 2022 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/waveform Copy
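To make the generated wave shapes concrete, here is a minimal Python sketch (illustrative only) that computes one cycle of a sine, square, triangle, and sawtooth wave:

  import math

  def one_cycle(num_points=16, amplitude=1.0):
      """Return one cycle of four basic waveforms as lists of samples."""
      sine, square, triangle, sawtooth = [], [], [], []
      for n in range(num_points):
          t = n / num_points  # position within the cycle, from 0.0 to 1.0
          sine.append(amplitude * math.sin(2 * math.pi * t))
          square.append(amplitude if t < 0.5 else -amplitude)
          # Triangle: rises from -A to +A in the first half, falls back in the second.
          triangle.append(amplitude * (4 * t - 1) if t < 0.5 else amplitude * (3 - 4 * t))
          # Sawtooth: rises steadily, then drops abruptly at the end of the cycle.
          sawtooth.append(amplitude * (2 * t - 1))
      return sine, square, triangle, sawtooth

A recorded audio waveform is effectively a much longer list of such samples, with many overlapping frequencies mixed together.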

Wavelength

Wavelength Wavelength is the distance between two identical adjacent points in a wave. It is typically measured between two easily identifiable points, such as two adjacent crests or troughs in a waveform. While wavelengths can be calculated for many types of waves, they are most accurately measured in sinusoidal waves, which have a smooth and repetitive oscillation. Wavelength is inversely proportional to frequency. That means if two waves are traveling at the same speed, the wave with a higher frequency will have a shorter wavelength. Likewise, if one wave has a longer wavelength than another wave, it will also have a lower frequency if both waves are traveling at the same speed. The following formula can be used to determine wavelength: λ = v / ƒ The lowercase version of the Greek letter "lambda" (λ) is the standard symbol used to represent wavelength in physics and mathematics. The letter "v" represents velocity and "ƒ" represents frequency. Since the speed of sound is roughly 343 meters per second at 68° F (20° C), 343 m/s can be substituted for "v" when measuring the wavelength of sound waves. Therefore, only the frequency is needed to determine the wavelength of a sound wave at 68° F. The note A4 (the A key above middle C) has a frequency of 440 hertz. Therefore, the wavelength of an A4 sound wave at 68° F is 343 m/s / 440 hz, which equals 0.7795 meters, or 77.95 cm. Waves in the electromagnetic spectrum, such as radio waves and light waves, have much shorter wavelengths than sound waves. Therefore, these wavelengths are typically measured in millimeters or nanometers, rather than centimeters or meters. Updated January 5, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wavelength Copy
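The formula translates directly into code. Here is a short Python sketch that reproduces the A4 example above (343 m/s is the assumed speed of sound at 68° F):

  def wavelength(velocity, frequency):
      """lambda = v / f"""
      return velocity / frequency

  SPEED_OF_SOUND = 343.0                   # meters per second at 68 F (20 C)
  lam = wavelength(SPEED_OF_SOUND, 440.0)  # the note A4 = 440 Hz
  print(round(lam, 4), "meters")           # 0.7795 meters, or 77.95 cm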

WDDM

WDDM Stands for "Windows Display Driver Model." WDDM is a display driver architecture introduced with Windows Vista. It improves graphics performance over the previous Windows XP architecture by more fully utilizing a computer's GPU to render system graphics. When Microsoft released the Vista operating system at the beginning of 2007, dedicated graphics processors (or GPUs) had become standard computer components. Additionally, GPU performance was increasing at a much faster rate than CPU performance. Therefore, software developers needed an efficient way to "offload" as much graphics processing from the CPU to the GPU as possible. This led Microsoft to rewrite the graphics driver system during the development of Windows Vista. The more efficient WDDM graphics architecture was the result. One of the biggest benefits of the WDDM architecture is that it supports GPU multitasking. This allows Windows users to run multiple graphics-intensive applications at the same time. It also simplifies graphics programming, making it easier for 3D game developers to take full advantage of a system's GPU. Additionally, WDDM offers improved stability by detecting when the driver hangs and restarting the display driver instead of requiring a full system restart. While the WDDM architecture was introduced with Windows Vista, it is also used by newer versions of the Windows operating system, such as Windows 7 and Windows 8. Windows 7 includes WDDM 1.1, while Windows 8 includes WDDM 1.2. These updates include improvements designed to support new graphics processing capabilities offered by modern GPUs. NOTE: WDDM may also be referred to as WVDDM or "Windows Vista Display Driver Model." However, this term is not commonly used since WDDM is now used by newer versions of Windows. Updated April 3, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wddm Copy

Web 2.0

Web 2.0 Web 2.0 is a term that was introduced in 2004 and refers to the second generation of the World Wide Web. The term "2.0" comes from the software industry, where new versions of software programs are labeled with an incremental version number. Like software, the new generation of the Web includes new features and functionality that were not available in the past. However, Web 2.0 does not refer to a specific version of the Web, but rather a series of technological improvements. Some examples of features considered to be part of Web 2.0 are listed below: Blogs - also known as Web logs, these allow users to post thoughts and updates about their life on the Web. Wikis - sites like Wikipedia and others enable users from around the world to add and update online content. Social networking - sites like Facebook and MySpace allow users to build and customize their own profiles and communicate with friends. Web applications - a broad range of new applications make it possible for users to run programs directly in a Web browser. Web 2.0 technologies provide a level of user interaction that was not available before. Websites have become much more dynamic and interconnected, producing "online communities" and making it even easier to share information on the Web. Because most Web 2.0 features are offered as free services, sites like Wikipedia and Facebook have grown at amazingly fast rates. As the sites continue to grow, more features are added, building off the technologies in place. So, while Web 2.0 may be a static label given to the new era of the Web, the actual technology continues to evolve and change. Updated January 14, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web20 Copy

Web Application

Web Application A web application or "web app" is a software program that runs on a web server. Unlike traditional desktop applications, which are launched by your operating system, web apps must be accessed through a web browser. Web apps have several advantages over desktop applications. Since they run inside web browsers, developers do not need to develop web apps for multiple platforms. For example, a single application that runs in Chrome will work on both Windows and OS X. Developers do not need to distribute software updates to users when the web app is updated. By updating the application on the server, all users have access to the updated version. From a user standpoint, a web app may provide a more consistent user interface across multiple platforms because the appearance is dependent on the browser rather than the operating system. Additionally, the data you enter into a web app is processed and saved remotely. This allows you to access the same data from multiple devices, rather than transferring files between computer systems. While web applications offer several benefits, they do have some disadvantages compared to desktop applications. Since they do not run directly from the operating system, they have limited access to system resources, such as the CPU, memory, and the file system. Therefore, high-end programs, such as video production and other media apps generally perform better as desktop applications. Web apps are also entirely dependent on the web browser. If your browser crashes, for example, you may lose your unsaved progress. Also, browser updates may cause incompatibilities with web apps, creating unexpected issues. Some people prefer desktop apps, while others prefer web applications. Therefore, many software companies now offer both desktop and web versions of their most popular programs. Common examples include Microsoft Office, Apple iWork, and Intuit TurboTax. In most cases, files saved in the online version are compatible with the desktop version and vice versa. For example, if you save a .TAX2013 file in TurboTax Online, you can open and edit the file with the desktop version. Updated February 17, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_application Copy

Web Beacon

Web Beacon A web beacon is a small image file — usually a transparent 1x1 pixel image — used for tracking purposes. It may be placed in a webpage or HTML email to record when the content was loaded. How do web beacons work? Each image referenced within a webpage is retrieved from a server. When the image is loaded, the server records a "hit," which is typically saved in the server log. A log file on the server can be used to determine when the specific image was accessed and also the IP address that accessed the file. The filename of a web beacon may include a question mark (?) followed by an identifying string, such as: beacon.png?user1234. The string after the question mark does not change which image is displayed, but the entire URL, including the identifying string, will be recorded as a hit by the web server. This information can be used to determine when a specific web beacon is accessed. While any image can be used as a web beacon, small transparent GIFs or PNGs are common since they can be placed unobtrusively on a page. They may also be used by third-party tracking tools that are not accessed from the main web server. Examples include analytics code like Google Analytics and affiliate links provided by other companies. An affiliate link, for example, may include a web beacon before or after the link. The beacon allows the publisher to record the number of impressions (or number of times the link is displayed), which is not possible with a plain text link. Web beacons in email marketing Web beacons are commonly used in email marketing to track the number of users who open and view emails. For instance, when you view a promotional email, a web beacon hidden in the email may record that you have opened it. This helps email marketers know which campaigns are effective. Unfortunately, email beacons may also be used for spam purposes, as they can record valid email addresses. For this reason, many email clients and webmail interfaces do not automatically load images in emails that are likely to be spam. Updated October 4, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_beacon Copy
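As a rough sketch of the server side (not a description of any real tracking service), the Python code below uses the standard http.server module to log each beacon request; the port and URL format are assumptions.

  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import urlparse

  class BeaconHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # The part after "?" (e.g., "user1234") identifies this hit.
          query = urlparse(self.path).query
          print("Beacon hit:", self.client_address[0], self.path, query)
          self.send_response(200)
          self.send_header("Content-Type", "image/gif")
          self.end_headers()
          # A production beacon would write the bytes of a tiny transparent
          # image here; the hit is logged either way.

  HTTPServer(("", 8080), BeaconHandler).serve_forever()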

Web Browser

Web Browser A web browser, or simply "browser," is an application used to access and view websites. Common web browsers include Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, and Apple Safari. The primary function of a web browser is to render HTML, the code used to design or "mark up" webpages. Each time a browser loads a web page, it processes the HTML, which may include text, links, and references to images and other items, such as cascading style sheets and JavaScript functions. The browser processes these items, then renders them in the browser window. Early web browsers, such as Mosaic and Netscape Navigator, were simple applications that rendered HTML, processed form input, and supported bookmarks. As websites have evolved, so have web browser requirements. Today's browsers are far more advanced, supporting multiple types of HTML (such as XHTML and HTML 5), dynamic JavaScript, and encryption used by secure websites. The capabilities of modern web browsers allow web developers to create highly interactive websites. For example, Ajax enables a browser to dynamically update information on a webpage without the need to reload the page. Advances in CSS allow browsers to display responsive website layouts and a wide array of visual effects. Cookies allow browsers to remember your settings for specific websites. While web browser technology has come a long way since Netscape, browser compatibility issues remain a problem. Since browsers use different rendering engines, websites may not appear the same across multiple browsers. In some cases, a website may work fine in one browser, but not function properly in another. Therefore, it is smart to install multiple browsers on your computer so you can use an alternate browser if necessary. Updated February 28, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_browser Copy

Web Design

Web Design Web design is the process of creating websites. It encompasses several different aspects, including webpage layout, content production, and graphic design. While the terms web design and web development are often used interchangeably, web design is technically a subset of the broader category of web development. Websites are created using a markup language called HTML. Web designers build webpages using HTML tags that define the content and metadata of each page. The layout and appearance of the elements within a webpage are typically defined using CSS, or cascading style sheets. Therefore, most websites include a combination of HTML and CSS that defines how each page will appear in a browser. Some web designers prefer to hand code pages (typing HTML and CSS from scratch), while others use a "WYSIWYG" editor like Adobe Dreamweaver. This type of editor provides a visual interface for designing the webpage layout and the software automatically generates the corresponding HTML and CSS code. Another popular way to design websites is with a content management system like WordPress or Joomla. These services provide different website templates that can be used as a starting point for a new website. Webmasters can then add content and customize the layout using a web-based interface. While HTML and CSS are used to design the look and feel of a website, images must be created separately. Therefore, graphic design may overlap with web design, since graphic designers often create images for use on the Web. Some graphics programs like Adobe Photoshop even include a "Save for Web…" option that provides an easy way to export images in a format optimized for web publishing. Updated February 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_design Copy

Web Development

Web Development Web development refers to building, creating, and maintaining websites. It includes aspects such as web design, web publishing, web programming, and database management. While the terms "web developer" and "web designer" are often used synonymously, they do not mean the same thing. Technically, a web designer only designs website interfaces using HTML and CSS. A web developer may be involved in designing a website, but may also write web scripts in languages such as PHP and ASP. Additionally, a web developer may help maintain and update a database used by a dynamic website. Web development includes many types of web content creation. Some examples include hand coding web pages in a text editor, building a website in a program like Dreamweaver, and updating a blog via a blogging website. In recent years, content management systems like WordPress, Drupal, and Joomla have also become popular means of web development. These tools make it easy for anyone to create and edit their own website using a web-based interface. While there are several methods of creating websites, there is often a trade-off between simplicity and customization. Therefore, most large businesses do not use content management systems, but instead have a dedicated Web development team that designs and maintains the company's website(s). Small organizations and individuals are more likely to choose a solution like WordPress that provides a basic website template and simplified editing tools. NOTE: JavaScript programming is a type of web development that is generally not considered part of web design. However, a web designer may reference JavaScript libraries like jQuery to incorporate dynamic elements into a site's design. Updated February 5, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_development Copy

Web Forum

Web Forum A Web forum is a website or section of a website that allows visitors to communicate with each other by posting messages. Most forums allow anonymous visitors to view forum postings, but require you to create an account in order to post messages in the forum. When posting in a forum, you can create new topics (or "threads") or post replies within existing threads. Web forums are available for all kinds of topics. Examples include software support, help for webmasters, and programming discussions. While lots of Web forums focus on IT topics, they are not limited to information technology. There are forums related to health, fitness, cars, houses, teaching, parenting, and thousands of other topics. Some forums are general, like a fitness forum, while others are more specific, such as a forum for yoga instructors. Since Web forums are comprised of user-generated content (UGC), they continue to grow as long as users visit the site and post messages. The webmaster of a Web forum simply needs to manage the forum, which may require moving, combining, and archiving threads. It may also involve monitoring postings and removing ones that are inappropriate. While this can be a large task for popular forums, most forum software, like vBulletin and phpBB, can filter out inappropriate content. Because Web forums are constantly growing, they have become a large part of the Web. In fact, if you search for help on a certain topic, there is a good chance one or more forum pages will appear in the top results. After all, if you have a question about something, odds are you're not the only one. You can use forums to glean knowledge from others who have shared your questions in the past. Conversely, you can help others by sharing your ideas and answers in an online forum. NOTE: Web forums are also called Internet forums, discussion boards, and online bulletin boards. Updated February 17, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_forum Copy

Web Host

Web Host A web host is a service provider that stores or "hosts" website files and makes them accessible over the internet. Website data may include HTML pages, CSS documents, web scripts, images, and database information. When a user visits a webpage, the corresponding web host responds to the request and sends data to the web browser. Web hosting services range from low-cost shared plans to high-end dedicated and cloud-based solutions for high-traffic websites. Some examples include: Shared Hosting – Multiple websites share the same resources from a single machine, making it a cost-effective option for small websites. VPS Hosting – A physical server is divided into virtual private servers, giving each website more dedicated resources. Dedicated Hosting – A single physical server is fully dedicated to one client, offering maximum performance and control. Cloud Hosting – Websites are hosted on multiple interconnected servers, providing scalability and redundancy. Managed Hosting – The web host handles server maintenance, monitoring, and updates. Managed services may be added to one of the different hosting options above. Popular web hosting providers include Liquid Web, GoDaddy, HostGator, and AWS. Website (CMS) platforms like Wix, Weebly, and Squarespace provide their own hosting services. Fun fact: TechTerms.com is hosted on a dedicated server from Liquid Web. The origin server is located in Lansing, Michigan, but most webpages are delivered from edge nodes around the world via Cloudflare's CDN. Updated February 25, 2025 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_host Copy

Web Publishing

Web Publishing Web publishing, or "online publishing," is the process of publishing content on the Internet. It includes creating and uploading websites, updating webpages, and posting blogs online. The published content may include text, images, videos, and other types of media. In order to publish content on the web, you need three things: 1) web development software, 2) an Internet connection, and 3) a web server. The software may be a professional web design program like Dreamweaver or a simple web-based interface like WordPress. The Internet connection serves as the medium for uploading the content to the web server. Large sites may use a dedicated web host, but many smaller sites often reside on shared servers, which host multiple websites. Most blogs are published on public web servers through a free service like Blogger. Since web publishing doesn't require physical materials such as paper and ink, it costs almost nothing to publish content on the web. Therefore, anyone with the three requirements above can be a web publisher. Additionally, the audience is limitless since content posted on the web can be viewed by anyone in the world with an Internet connection. These advantages of web publishing have led to a new era of personal publishing that was not possible before. NOTE: Posting updates on social networking websites like Facebook and Twitter is generally not considered web publishing. Instead, web publishing generally refers to uploading content to unique websites. Updated December 21, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_publishing Copy

Web Server

Web Server A Web server is a computer system that hosts websites. It runs Web server software, such as Apache or Microsoft IIS, which provides access to hosted webpages over the Internet. Most Web servers are connected to the Internet via a high-speed connection, offering OC-3 or faster data transmission rates. A fast Internet connection allows Web servers to support multiple connections at one time without slowing down. Any computer can be used as a Web server, as long as it is connected to the Internet and has the appropriate software installed. However, most Web servers are 1U rack-mounted systems, meaning they are flat, trimmed down computers that can be mounted on a server rack. Most Web hosting companies have several server racks, which each contain multiple servers. This is the most space-efficient way to host a large number of websites from a single location. Web servers typically host multiple websites. Some only host a few, while others may host several hundred. Web servers that host websites for multiple users are called "shared hosts." This is the most common type of hosting solution and is used for personal sites, small business sites, and websites run by small organizations. Web servers that only host websites for a single person or company are called "dedicated hosts." These types of servers are appropriate for high-traffic websites and sites that require custom server modifications. Dedicated hosts are also more reliable than shared hosts, since there are fewer sites that can cause bottlenecks or other issues with the server. Updated February 19, 2011 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_server Copy

Web Service

Web Service A web service is an application or data source that is accessible via a standard web protocol (HTTP or HTTPS). Unlike web applications, web services are designed to communicate with other programs, rather than directly with users. While web services can provide data in a number of different formats, XML and JSON are the most common. These standard text-based formats can be easily recognized and parsed by another program that receives the data. The most common web service protocol – SOAP (Simple Object Access Protocol) – wraps each XML message in a standard envelope before it is transferred over HTTP. Business-oriented web services may also use UDDI (Universal Description, Discovery, and Integration), a registry standard for publishing and discovering services. These services are described using the Web Services Description Language (WSDL), a specific type of XML, and the data itself may still be transferred using the SOAP protocol. Most web services provide an API, or a set of functions and commands, that can be used to access the data. For example, Twitter provides an API that allows developers to access tweets from the service and receive the data in JSON format. Yelp provides an API for programmers to access information about businesses, which can be displayed directly in an app or website. Google Maps provides an API for receiving geographical data and directions from the Google Maps database. NOTE: An API is a specific set of commands and guidelines used for accessing data, while a web service is an actual service provided by an Internet-based source. Updated August 10, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/web_service Copy
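For illustration, the Python sketch below shows the client side of a typical JSON web service: send an HTTP GET request and parse the response. The URL and field names are placeholders, not a real API.

  import json
  import urllib.request

  # Hypothetical endpoint; a real web service documents its own URL and parameters.
  url = "https://api.example.com/businesses?location=Chicago"

  with urllib.request.urlopen(url) as response:
      data = json.loads(response.read().decode("utf-8"))

  # The structure of the returned data depends entirely on the service's API.
  for business in data.get("results", []):
      print(business.get("name"), business.get("rating"))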

Webcam

Webcam A webcam is a small digital video camera that connects to a computer to capture video and audio for online video calls. Laptop computers often include a thin built-in webcam integrated into the lid, just above the screen. Separate webcams that connect to a computer over a USB connection and clip to the top of a monitor are also widely available. The most common use for a webcam is to participate in a one-on-one or group video call. You can make social calls through popular apps like Zoom, FaceTime, or Facebook Messenger. You can join online teleconferences or even consult with your doctor through telehealth appointments. However, you can also use a webcam to record videos for playback or editing later. You can broadcast video from your webcam to people online using streaming platforms like Twitch. You can even use a webcam as a security camera, although dedicated security cameras offer better image quality and don't require a direct connection to a computer. A Logitech C920X HD Pro webcam Since online video calls are often highly compressed to save bandwidth, webcams do not need to include the same high-quality optics and lenses as dedicated digital cameras. Most webcams are limited to 1080p HD video, with a few high-end webcams capable of resolutions up to 4K. However, if you want the best possible image quality for a video call, many high-end digital cameras can be configured through software to work as a webcam. Updated October 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/webcam Copy

Webhook

Webhook A webhook is an event notification transmitted via HTTP, the same protocol used for transferring webpage data. It is typically sent as a POST request, which contains data that is "posted" to a specific URL. The URL defines the location of a script, which processes the data in the POST request. Webhooks can be built into any application, including web apps, mobile apps, and desktop software apps. Specific events can be programmed to generate webhooks, or "HTTP callbacks," which are event notifications sent over HTTP. The data may be formatted in whatever way the developer chooses, though JSON and XML formatting are commonly used. The POST data, which is sent to a specific URL, is parsed by the corresponding script on a web server. The script may be written in one of many different server-side scripting languages, such as PHP, JSP, or C#. It may perform one or more actions, such as saving the data in a database, emailing the information to a specific address, or sending new data back to the source. Webhooks are used for a wide variety of purposes. Examples include notifying businesses of sales, activating and deactivating software programs, updating customer information, and informing developers of software crashes. Some websites even provide APIs that allow users to send data to a URL when specific events happen. GitHub, for example, provides a list of events that can trigger a webhook, which developers can use to track changes to projects stored in their online repository. While webhooks are an effective tool, they require an Internet connection between the data source and the web server to function. Additionally, a script must be present on the server at the destination URL and it must be able to recognize and parse the POST data. If the Internet connection or script is not available, the webhook will not work. Updated June 12, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/webhook Copy
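As a simple sketch of the sending side, the Python code below posts a JSON payload to a webhook URL using only the standard library; the URL and payload fields are made up for illustration. The script at the receiving URL would parse this POST body and act on it.

  import json
  import urllib.request

  # Hypothetical webhook endpoint and event payload.
  url = "https://example.com/hooks/order-created"
  payload = {"event": "order.created", "order_id": 1234, "total": 59.99}

  request = urllib.request.Request(
      url,
      data=json.dumps(payload).encode("utf-8"),
      headers={"Content-Type": "application/json"},
      method="POST",
  )

  with urllib.request.urlopen(request) as response:
      print("Webhook delivered, status", response.status)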

Webmail

Webmail Webmail refers to email services that provide a web-based interface instead of (or in addition to) using a local email client application. Many webmail services are free or offer a mix of free and paid service tiers. The most common webmail providers include Gmail, Outlook.com (formerly Hotmail), Yahoo! Mail, and iCloud Mail. However, if you own your own domain, many web hosts offer a webmail interface for sending and receiving email through your domain's email addresses. Using webmail for your email account can be a lot more convenient than using an email client app. You can access your email from any computer or mobile device as long as it has an Internet connection and a web browser. There's no software to install, and setup is usually as simple as choosing a unique email address and setting a password. All activity is automatically synced to the mail server, so when you read a message on one device, it is marked as read when you check your inbox from another device later. Most webmail services also provide automatic spam filtering for your inbox and virus scanning for attachments you receive. If you would rather use an email client app like Microsoft Outlook, Apple Mail, or Mozilla Thunderbird, you can add a webmail account to an app just like any other email account. You'll be able to check your messages in the app on your computer and through a web browser on other devices. However, the convenience of webmail comes with tradeoffs. Accessing your webmail requires an active Internet connection, so if you're offline, you won't be able to view already-delivered messages. Your webmail account also relies solely on the cloud storage offered by the provider, and while some services are generous with the space they offer, storage space is not unlimited. You can download and archive messages from a webmail account through a web browser, but it's often a more cumbersome process than archiving messages using an email client. Updated September 29, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/webmail Copy

Webmaster

Webmaster A webmaster is a person responsible for managing and maintaining a website. In the early days of the web, it referred to an individual who handled every aspect of running a website; in modern usage, it instead refers to a person responsible for overseeing its general operations. The term "webmaster" comes from the role of "postmaster," referring to the individual in charge of running an email server. A webmaster works with the website host to maintain the web server and keep the website online. They are responsible for writing code that runs the website (both front-end languages like HTML and back-end languages like PHP or Perl) and adding content. A webmaster is also the primary point of contact for the users of a website, taking feedback from visitors and resolving the issues they bring up. The term is no longer as common as it was in the early days of the Internet, when it was not only possible but routine for a single individual to handle all aspects of running a website. The technology required to run a modern website is more complex and requires a wide array of skills. It is now more common to delegate tasks amongst several team members with more specific roles and job titles. Updated October 17, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/webmaster Copy

Webpage

Webpage Webpages are the files that make up the World Wide Web. An individual webpage is a text document written in HTML (hypertext markup language). When a web browser displays a webpage, it translates the markup language into readable content. Webpages are hosted on a web server, and the collection of webpages hosted on the same domain name make up a website. An individual webpage consists of several different elements. In addition to the HTML file itself, most webpages include a cascading style sheet (CSS) that controls how text, tables, and other elements are formatted. Style sheets may be embedded in the webpage's HTML file, or stored as a separate file on the web server. Images, video, and audio that appear on a webpage are also separate files on the web server, and are loaded with the webpage when you visit it. A typical webpage consists of an HTML file, style sheet, images, and other resources A webpage can be either static or dynamic. Static pages show the same content each time they are viewed and are saved on the web server as HTML files. Dynamic pages are instead generated on demand whenever a browser requests them. These pages are typically written in scripting languages such as PHP, Perl, ASP, or JSP. The scripts in the pages run functions on the server and return the results of those functions, like displaying some information from a database. The server returns the requested information as HTML code, so when the page gets to your browser, it can translate and display the HTML. Updated February 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/webpage Copy

Webring

Webring A webring (or "web ring") is a network of interlinked websites that share a common theme or interest. The goal is to allow users to explore related content by following links on each participating site. Visitors can navigate the ring by moving forward or backward between sites or selecting a specific site from a list. Webrings were popular in the early days of the internet, serving as a way for communities to connect and share information before the rise of search engines and social media. They were typically managed by a central site, which used a web application to maintain the ring's integrity. The central "hub" ensured the network stayed relevant by removing outdated links, adding new URLs, and sometimes featuring specific sites. This automated system kept the content within the ring fresh and helped visitors discover new, related websites. Historically, webrings could be easily identified by prominent, often colorful banners or buttons at the bottom of a page, inviting visitors to continue their journey through the ring. Today, the term "webring" is rarely used, as other internet platforms like social media, web forums, and search engines have filled the role that webrings once played. However, the idea of interconnected communities persists in other forms, such as blog networks or specialized online groups. Webrings vs Portals Webrings are similar to portals but are decentralized and community-driven. A webring is a network of interlinked sites with a shared theme, while a portal is a centralized platform that aggregates and organizes content, providing a gateway to a broad range of information or services. Updated August 23, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/webring Copy

Website

Website A website is a collection of webpages grouped together using the same domain name and operated by the same person or organization. You can access a website using a web browser by entering its URL directly or by clicking a link to it from another website. For example, techterms.com is a website, while techterms.com/definition/website is a single webpage on that website. A web server (or several web servers working together) hosts a website by storing its webpages and other resources and making it accessible to anyone through the World Wide Web. Websites are organized around a single home page, which is accessible by entering the website's root URL. From the home page, you can navigate around a website by using its navigation links or search fields. Websites are often built around a single topic or with a single purpose, like giving information about a topic or business, operating an online storefront, or providing a social networking service. Websites can provide either static or dynamic content; static pages are written and stored on a web server as HTML, while dynamic pages are generated on-demand using content in a database. Some websites serve a combination of static and dynamic content based on the particular webpage that a user is visiting. Updated January 18, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/website Copy

WEP

WEP Stands for "Wired Equivalent Privacy." WEP was a security protocol used by first- and second-generation Wi-Fi networks. It was part of the original 802.11 Wi-Fi specification, using encryption to protect data transmitted over radio waves. Due to significant security vulnerabilities, WEP was deprecated and replaced by WPA and WPA2. WEP was introduced with the original Wi-Fi specification and served as the standard method of encrypting traffic for 802.11a and 802.11b networks. It used a shared key to encrypt and decrypt data packets, known as the "WEP key," which also served as the network's password. These keys were 40- or 104-bit hexadecimal strings and, when combined with the 24-bit initialization vector, could encrypt network traffic with 64- or 128-bit encryption. The initialization vector changed with each new data packet, so the encryption key changed constantly. However, WEP was ultimately vulnerable to eavesdropping. The shared key that protected data packets was static, and using only 24 bits for the initialization vector meant that encryption keys would eventually repeat. If given enough time, eavesdroppers could intercept enough packets to decrypt the key. Within a few years of WEP's introduction, computers were fast enough to crack WEP encryption within a few minutes. WEP was deprecated and replaced with WPA encryption for 802.11g networks, starting in 2003. Updated June 13, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wep Copy

White Balance

White Balance White balance is a feature many digital cameras and video cameras use to accurately balance color. It defines what the color white looks like in specific lighting conditions, which also affects the hue of all other colors. Therefore, when the white balance is off, digital photos and recordings may appear to have a certain hue cast over the image. For example, fluorescent lights may cause images to have a greenish hue, while pictures taken on a cloudy day may have a blue tint. Since different types of lighting affect the way a camera's sensor captures color, most digital cameras and camcorders include an auto white balance (AWB) setting. The AWB setting automatically adjusts the white balance when capturing a photo or recording video. However, this setting may not always provide the most accurate color. Therefore, many cameras and camcorders also include preset white balance settings for different lighting conditions. Common options include fluorescent light, tungsten light (for typical indoor lighting), cloudy conditions, bright sunlight, and camera flash. By choosing the appropriate white balance preset, you may be able to capture pictures with more accurate color. Some high-end cameras and camcorders also include a custom white balance option. This feature allows you to take a sample of a white object, such as a white wall or a piece of paper, within the current lighting conditions. By manually setting the white balance to the white color within the sample image, you can set the white balance with a high degree of accuracy. Of course, if you find out you have already taken several photos with the incorrect white balance setting, you can adjust the color afterwards with an image editing program. Some digital video cameras also include a "black balance" setting, which is used to define how black should appear in the current lighting conditions. However, this setting is used far less commonly than white balance. Updated August 30, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/whitebalance Copy

White Box Testing

White Box Testing White box testing is a type of software testing in which the person understands the application design and source code. Using these insights, the person can run tests that reveal the most likely bugs or performance issues. Since the original developer is the one who best understands the program, he or she is best-suited to perform white box testing. However, other programmers familiar with the application can also run white box tests. In either case, the tester must have a solid understanding of the program's purpose and the underlying code. White box testing helps ensure a program is stable and performant. It is useful for stress-testing an app with large files or with edge cases that most users won't encounter. The bug fixing process is efficient since the person who discovers the errors often fixes them. On the other hand, the original developer may not use the app as a typical user and may not find errors or bugs another user may discover. White box testing is also not ideal for improving an app's feature set or user interface, where outside feedback is especially valuable. NOTE: Software testing in which the user has no insight into the internals of a program is called black box testing. Updated August 28, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/white_box_testing Copy
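As a small, generic example of the idea (not tied to any particular project), a tester who knows the function below handles empty input in a separate branch can write a test that deliberately exercises that branch:

  import unittest

  def average(values):
      # Internal detail a white box tester would know about:
      # an empty list is handled by its own branch.
      if not values:
          return 0.0
      return sum(values) / len(values)

  class AverageWhiteBoxTest(unittest.TestCase):
      def test_normal_input(self):
          self.assertEqual(average([2, 4, 6]), 4.0)

      def test_empty_input_branch(self):
          # Chosen specifically because the tester knows this code path exists.
          self.assertEqual(average([]), 0.0)

  if __name__ == "__main__":
      unittest.main()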

White Paper

White Paper A white paper is a written report that attempts to educate the reader about a complex topic. It presents well-researched factual information in an easy-to-understand manner while also offering the issuing organization's social, political, or business position on the subject. The term was first used by government organizations; businesses in the IT industry have since adopted it for reports that explain new and emerging technologies and products like artificial intelligence and augmented reality. White papers are a common part of technology product marketing. A business may commission a white paper to outline the problem their product is meant to solve; other white papers may instead explain how a new technology operates while outlining its advantages over other technologies. Even when used as a marketing device, a white paper should still be backed by reliable research and present its findings academically. A white paper is generally not a short document — most are at least 5 to 10 pages. Despite the name "white paper," they're often colorful, well-designed digital documents. They will use illustrations and charts to help the reader understand the findings and position of the white paper's author. They are often distributed as web pages or PDF documents. Updated November 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/whitepaper Copy

Whitelist

Whitelist A whitelist is a list of items that are granted access to a certain system or protocol. When a whitelist is used, all entities are denied access, except those included in the whitelist. The opposite of a whitelist is a blacklist, which allows access from all items, except those included in the list. The following are examples of different whitelist applications: A network administrator may configure a firewall with a whitelist that only allows specific IP addresses to access the network. A protected directory within a website may use a whitelist to limit access to certain IP addresses. Some e-mail systems can be configured to only accept messages from e-mail addresses that have been added to a user's whitelist. Programmers can use whitelists within programs to ensure only certain objects are modified. Whitelists are a good option when only a limited number of entities need to be granted access. Because all items not included in a whitelist are denied access, whitelists are considered more secure than blacklists. However, if only a few entities need to be denied access, a blacklist is more practical. Updated September 29, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/whitelist Copy
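A minimal Python sketch of the firewall-style example above, using the standard ipaddress module; the address ranges are placeholders.

  import ipaddress

  # Hypothetical whitelist of allowed addresses and ranges.
  WHITELIST = [
      ipaddress.ip_network("192.168.1.0/24"),
      ipaddress.ip_network("10.0.0.5/32"),
  ]

  def is_allowed(address):
      """Deny everything except addresses covered by the whitelist."""
      ip = ipaddress.ip_address(address)
      return any(ip in network for network in WHITELIST)

  print(is_allowed("192.168.1.42"))  # True  - inside a whitelisted range
  print(is_allowed("203.0.113.9"))   # False - everything else is denied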

WHOIS

WHOIS WHOIS (pronounced "who is") is an Internet service used to look up information about a domain name. While the term is capitalized, "WHOIS" is not an acronym. Instead, it is short for the question, "Who is responsible for this domain name?" Domain names are registered through companies called registrars. Examples include GoDaddy, Tucows, Namecheap, and MarkMonitor. These companies have been approved and accredited by ICANN to register new domain names. Whenever an individual or organization registers a new domain name, the registrar is required to make the registration information publicly available. This information can be looked up online using the WHOIS service. Since there is no central database of domain registration information, WHOIS search engines look up data across multiple registrars. Many registrars provide their own WHOIS lookup service, though several third party WHOIS websites also exist. ICANN provides its own WHOIS lookup service as well. What is a WHOIS record? WHOIS records vary between registrars, but they all contain mandatory information. This includes the name of the registrar, created date, updated date, and expiration date of the domain name. The name servers are also listed. Three contacts are included — the registrant, admin, and technical contacts. This information, provided during registration, includes a name, organization (if applicable), address, phone number, and email address. In many cases, the information is duplicated across all three contacts, though each one may have different information. NOTE: The contact information in a WHOIS record can be made private using a "private registration" service offered by some registrars. This service, which may be available for an additional fee, masks the registrant's contact information with non-personally identifiable information. A domain registered with domain privacy will still show up in a WHOIS search, but the organization may appear as "Whois Privacy Services" and the email address "abcd1234@whoisprivacyservices.com." Updated June 21, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/whois Copy
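The WHOIS protocol itself is simple: open a TCP connection to port 43 on a WHOIS server, send the domain name, and read the text reply. A rough Python sketch is below; whois.iana.org is one public WHOIS server, and individual registrars run their own.

  import socket

  def whois_query(domain, server="whois.iana.org", port=43):
      """Send a WHOIS query and return the raw text response."""
      with socket.create_connection((server, port), timeout=10) as sock:
          sock.sendall((domain + "\r\n").encode("ascii"))
          chunks = []
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              chunks.append(data)
      return b"".join(chunks).decode("utf-8", errors="replace")

  print(whois_query("example.com"))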

Whole Number

Whole Number A whole number is an integer that is 0 or greater. The first five whole numbers are 0, 1, 2, 3, and 4. They continue upwards to infinity. Whole numbers are almost identical to natural numbers except they include 0. This is important in computer science since numeric ranges often begin with zero. For example, the index of the first record in an array is 0, rather than 1. 24-bit RGB color provides a range of 0 to 255 for red, green, and blue values. These values are represented by whole numbers rather than natural numbers because they include 0. As the name implies, a whole number is not a fraction. It also cannot be negative. Since integers range from negative infinity to positive infinity, whole numbers are a subset of integers. Updated June 19, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/whole_number Copy
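A quick Python illustration of the two examples above (zero-based array indexing and 0-255 color values):

  colors = ["red", "green", "blue"]
  print(colors[0])   # the first element is at index 0, a whole number

  def is_whole_number(n):
      """True for integers that are 0 or greater."""
      return isinstance(n, int) and n >= 0

  print(is_whole_number(255))   # True  - a valid 8-bit color value
  print(is_whole_number(-1))    # False - negative numbers are not whole numbers
  print(is_whole_number(2.5))   # False - fractions are not whole numbers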

Wi-Fi

Wi-Fi Wi-Fi is a wireless networking technology that allows computers and other devices to communicate over a wireless signal. It describes network components that are based on one of the 802.11 standards developed by the IEEE and adopted by the Wi-Fi Alliance. Examples of Wi-Fi standards, in chronological order, include: 802.11a 802.11b 802.11g 802.11n 802.11ac Wi-Fi is the standard way computers connect to wireless networks. Nearly all modern computers have built-in Wi-Fi chips that allow users to find and connect to wireless routers. Most mobile devices, video game systems, and other standalone devices also support Wi-Fi, enabling them to connect to wireless networks as well. When a device establishes a Wi-Fi connection with a router, it can communicate with the router and other devices on the network. However, the router must be connected to the Internet (via a DSL or cable modem) in order to provide Internet access to connected devices. Therefore, it is possible to have a Wi-Fi connection, but no Internet access. Since Wi-Fi is a wireless networking standard, any device with a "Wi-Fi Certified" wireless card should be recognized by any "Wi-Fi Certified" access point, and vice-versa. However, wireless routers can be configured to only work with a specific 802.11 standard, which may prevent older equipment from communicating with the router. For example, an 802.11n router can be configured to only work with 802.11n devices. If this option is chosen, devices with 802.11g Wi-Fi chips will not be able to connect to the router, even though they are Wi-Fi certified. NOTE: The name "Wi-Fi" is similar to "Hi-Fi," which is short for "High Fidelity." However, "Wi-Fi" is not short for "Wireless Fidelity," but is simply a name chosen by the Wi-Fi Alliance. Updated March 11, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wi-fi Copy

Wi-Fi 6

Wi-Fi 6 Wi-Fi 6 (IEEE 802.11ax) is the sixth generation of Wi-Fi technology. It provides faster wireless speeds and handles multiple connections more efficiently than the previous 802.11ac standard, also known as Wi-Fi 5. Specs Wi-Fi 6 supports data transfer rates of up to 1,200 Mbps (1.2 gigabits per second) per stream. That is roughly 3x faster than Wi-Fi 5's top speed of 433 Mbps per stream. Wi-Fi 5 allows three simultaneous streams, for a total of roughly 1,300 Mbps. Wi-Fi 6 supports four streams, for a theoretical maximum speed of 4,800 Mbps. The Wi-Fi 6 standard supports channel bandwidths of 20, 40, 80, and 160 MHz. It can operate on either 2.4 GHz or 5 GHz frequencies and is backward-compatible with older devices that do not support 802.11ax. An extension of the standard, called Wi-Fi 6E, provides a new 6 GHz frequency band to help reduce wireless congestion on the commonly-used 5 GHz band. The 5 GHz band allows faster data transmission rates than 2.4 GHz, but the 2.4 GHz band has longer range. Technologies Wi-Fi 6 includes two primary technologies that improve wireless network efficiency: OFDMA and Multi-user MIMO. OFDMA (Orthogonal Frequency Division Multiple Access) enables connected clients to share channels. It reduces latency and improves efficiency, especially in busy wireless networks. MU-MIMO (Multi-User Multiple Input, Multiple Output) allows more data to be transferred at one time. It enables access points to communicate with up to eight devices simultaneously. Requirements To use Wi-Fi 6, you need a Wi-Fi 6 router and a Wi-Fi 6 device, such as a smartphone or laptop with an 802.11ax wireless adapter. Wi-Fi technologies are backward-compatible, but if either end of the connection does not support Wi-Fi 6, the devices will communicate via the highest common standard. NOTE: While Wi-Fi 6 is theoretically several times faster than Wi-Fi 5, actual data transfer speeds are rarely close to the maximum transmission rate. Several factors can affect download speeds over Wi-Fi, including signal strength, number of active devices, and the Internet speed itself. Updated May 4, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wi-fi_6 Copy

Widget

Widget A widget is a small UI object on a desktop or home screen that provides quick access to information and functionality from an application without launching the app itself. An operating system's widget framework works with apps to allow them to display data in widgets even while the app isn't running. Widget support is part of most desktop and mobile operating systems, including Windows, macOS, iOS, and Android. Each widget is tailored to a single purpose in order to keep the information it presents concise and glanceable. A single app can often provide several widget options to allow you to decide exactly what information you want to see. For example, a weather app might provide one widget that displays current weather conditions and another that shows a 3-day forecast. Other common examples include a calendar that displays your upcoming appointments, a clock that displays an analog clock face with the current time, and a flight tracker that shows when a designated flight departs and lands. Several macOS desktop widgets Each operating system that supports widgets treats them differently. Windows 11 keeps them on a widgets board you can bring up using a taskbar button, keyboard shortcut, or touchscreen gesture. macOS can display widgets in the notification center and, as of macOS 14 Sonoma, on the desktop itself. Widgets on Android and iOS are added directly to the home screen and don't require any extra steps to see. Updated October 27, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/widget Copy

Wiki

Wiki A wiki is a type of website that allows users to collaboratively edit its content and structure from within a web browser. Most wiki sites are a collection of hyperlinked pages that serve as a knowledge base for an organization or online community. Wiki software is a style of content management system (CMS) running on a web server, also called a "wiki engine." Many different wiki engines are available. Some wiki engines are designed for public online communities, while others specialize in internal enterprise use. Public wikis usually emphasize large user bases and community discussion, allowing anyone to sign up and edit pages. Internal enterprise wiki sites instead focus on controlling access and editing privileges, integrating with other software, and managing a document library. A wiki administrator can choose whether to allow any user to edit content or restrict privileges to registered users — they can even specify multiple classes of users with different roles and permissions. Any user that can edit a wiki can do so using a simple markup language (or sometimes a rich text editor) directly in a web browser window without any extra software. Since allowing users to edit a wiki page's content can lead to accidental loss or vandalism, a wiki engine will keep track of every page's edit history so that other users can roll back unwanted changes. A wiki engine can also require approval of certain changes before they are published. NOTE: The most prominent example of a wiki on the Internet is Wikipedia. Updated November 8, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wiki Copy

Wildcard

Wildcard In computing, a wildcard refers to a character that can be substituted for zero or more characters in a string. Wildcards are commonly used in computer programming, SQL database queries, and when navigating DOS or Unix directories from the command line. Below are some popular uses for wildcards: Regular Expressions - A period (.) matches a single character, while .* matches zero or more characters and .+ matches one or more characters. ▶ Example: $pattern = "Mac(.*)" SQL Queries - A percent symbol (%) matches zero or more characters, while an underscore (_) matches a single character. ▶ Example: SELECT * FROM Computers WHERE Platform LIKE 'Mac%' Directory Navigation - An asterisk (*) matches zero or more characters, while a question mark (?) matches a single character. ▶ Example: dir *.exe In the examples above, wildcards are used to search for partial matches instead of exact matches. This can be helpful when searching for files or looking up information from a database. Updated May 26, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wildcard Copy
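The same patterns can be tried in a scripting language. A minimal Python sketch that mirrors the examples above (the file names and the "Mac" string are illustrative):

    import fnmatch   # shell-style wildcards: * and ?
    import re        # regular expressions: . and .*

    files = ["setup.exe", "install.exe", "readme.txt"]

    # Shell-style wildcard: "*" matches zero or more characters
    print(fnmatch.filter(files, "*.exe"))    # ['setup.exe', 'install.exe']

    # Regular expression: ".*" matches zero or more of any character
    match = re.match(r"Mac(.*)", "MacBook Pro")
    print(match.group(1))                    # 'Book Pro'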

WiMAX

WiMAX WiMAX is a wireless communications standard designed for creating metropolitan area networks (MANs). It is similar to the Wi-Fi standard, but supports a far greater range of coverage. While a Wi-Fi signal can cover a radius of several hundred feet, a fixed WiMAX station can cover a range of up to 30 miles. Mobile WiMAX stations can broadcast up to 10 miles. While Wi-Fi is a good wireless Internet solution for home networks and coffee shops, it is impractical for larger areas. In order to cover a large area, multiple Wi-Fi repeaters must be set up at consistent intervals. For areas that span several miles, this is a rather inefficient method to provide wireless access and typically requires lots of maintenance. WiMAX, on the other hand, can cover several miles using a single station. This makes it much easier to maintain and offers more reliable coverage. WiMAX is also known by its technical name, "IEEE 802.16," which is similar to Wi-Fi's technical specification of 802.11. It is considered the second generation broadband wireless access (BWA) standard and will most likely be used along with Wi-Fi, rather than replace it. Since WiMAX has such a large signal range, it will potentially be used to provide wireless Internet access to entire cities and other large areas. In fact, some proponents of WiMAX predict it will eventually spread Internet access to all parts of the globe. Updated August 21, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wimax Copy

WIMP

WIMP Stands for "Windows, Icons, Menus, Pointer." WIMP is an acronym that emerged in the 1980s and describes the graphical user interface (GUI) of personal computers. It includes both Windows and Macintosh interfaces, as well as the interfaces of other less common operating systems, such as Linux and NeXT. While the terms GUI and WIMP are sometimes used interchangeably, WIMP is technically a subset of GUIs. This means all WIMP interfaces are GUIs, but not all GUIs are WIMPs. WIMP-based systems are designed to be used with a keyboard and mouse, since the mouse controls the pointer (or cursor) and the keyboard is used to enter data. Other GUIs may support different types of input, such as a touchscreen display. Modern GUIs, many of which have a touchscreen interface, are sometimes referred to as "post-WIMP" interfaces. Examples include iOS and Android, which are popular smartphone and tablet operating systems. These interfaces include icons, but often lack windows and menus. Since no mouse is required for a touchscreen interface, there is no pointer. NOTE: WIMP also stands for "Windows, IIS, MySQL, and PHP," which is a variation of WAMP. It is a software package commonly installed on Windows-based web servers. Updated November 7, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wimp Copy

Win32

Win32 Win32 refers to the collection of Windows application programming interfaces (APIs) for developing 32-bit programs. Since Windows now supports 64-bit processors and applications, the API set is known as the Windows API (or WinAPI). Microsoft provides a software development kit, the Windows SDK, that developers use to build software with the Windows API. The API provides software developers with a common set of tools and commands for their software to interact with the Windows operating system and the computer's hardware. For example, it includes a common framework for application windows, dialog boxes, scroll bars, and other user interface elements. It also provides ways for the program to interact with files, connect to hardware devices, and interact with other computers over a network. The first API set for Windows, then called the Windows API, supported 16-bit programs for Windows versions 1.0 through 3.1. With the introduction of Windows NT and Windows 95, support for 32-bit programs was added. To differentiate between 16-bit and 32-bit programs, the early APIs were retroactively renamed Win16 and the new APIs were called Win32. Win32 / Windows API is not the only API that software developers can use to write programs for Windows. Other APIs Microsoft has introduced for Windows include .NET, Universal Windows Platform (UWP), and WinRT. However, the evolution of the Windows API through Win16, Win32, and the modern WinAPI affords a high level of backward compatibility. Software written for old versions of Windows can still run on newer versions, with some extra compatibility work done by the operating system. Updated October 7, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/win32 Copy
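Calling into the Windows API does not require C; any language that can load a system DLL can use it. A minimal sketch in Python using the standard ctypes module (Windows only; the message text is illustrative):

    import ctypes

    # user32.dll exports MessageBoxW, a long-standing Win32 / Windows API function
    # that asks the operating system to display a standard dialog box.
    user32 = ctypes.windll.user32
    user32.MessageBoxW(None, "Hello from the Windows API", "Win32 example", 0)   # 0 = MB_OK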

Window

Window A window is an area on the screen that displays information for a specific program. This often includes the program's graphical user interface (GUI) as well as its content. Windows are used by most applications as well as the operating system itself. A typical window includes a title bar along the top that describes the contents of the window, followed by a toolbar that contains user interface buttons. Most of the window's remaining area is used to display the content. Examples: Web Browser windows: The top of a typical Web browser window contains a title bar that displays the title of the current page. Below the title is a toolbar with back and forward buttons, an address field, bookmarks, and other navigation buttons. Below the toolbar is the content of the current Web page. The bottom of the window may contain a status bar that displays the page loading status. Word Processing windows: A window used by a word processing program, such as Microsoft Word, typically includes buttons for page and text formatting, followed by a ruler that defines the document area. Below the ruler is the main page area used for entering text. Operating System windows: Windows used by the operating system typically include navigation buttons along the top and shortcuts to folders and other locations on the left side of the window. The rest of the window is used to display icons or lists of files and folders. Most windows can be opened, closed, resized, minimized, and moved around the screen. The close, minimize, and zoom buttons are located on the title bar (on the right side on Windows and the left side on Macs). Minimizing a window will hide its contents but keep a reference to it in the Taskbar (Windows) or the Dock (Mac). Closing a window will make it disappear completely (so you may be asked to save your changes first). To move a window, click on the title bar and drag the window where you want it. To resize a window, either click the Zoom button in the title bar or click and drag the lower right-hand corner to expand or contract the window to the size you want. Updated August 28, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/window Copy

Windows

Windows Windows is a desktop operating system developed by Microsoft. For the past three decades, Windows has been the most popular operating system for personal computers. Each version of Windows comes with a graphical user interface that includes a desktop with icons and a task bar that is displayed at the bottom of the screen by default. The Windows "File Explorer" allows users to open multiple windows, browse folders, and open files and applications. Most Windows versions include a Start menu, which provides quick access to files, settings, and the Windows search feature. The task bar at the bottom of the screen is an easy way to identify a Windows desktop. Conversely, the macOS desktop has a menu bar at the top of the screen. In Windows, the close, minimize, and zoom buttons are located on the right side of each window's title bar, while in macOS, they are located on the left. The first version of Windows was released in 1985. Since then, the OS has gone through several major updates. A few of the most notable Windows releases include: Windows 3.1 (1992) Windows 95 (1995) Windows XP (2001) Windows 7 (2009) Windows 10 (2015) Windows 11 (2021) The above versions of Windows had long lifespans and were well-received by users. Some of the less-popular versions of Windows, which had shorter lifespans, include: Windows 98 (1998) Windows Me (2000) Windows Vista (2006) Windows 8 (2012) Microsoft has released most versions of Windows with multiple editions, tailored for different users. For example, Windows 11 is available in "Home" and "Pro" editions. The Home edition is sufficient for most users, while the Pro edition includes additional networking and administrative features useful in corporate workspaces. Windows runs on standard x86 hardware, such as Intel and AMD processors. Unlike Apple, Microsoft licenses the OS to multiple manufacturers. Therefore, numerous companies, such as Dell, HP, Acer, Asus, and Lenovo sell Windows PCs. Microsoft develops its own line of Windows Surface laptops as well. Software programs written for Windows may be called apps, applications, or executable files. Regardless of their label, Windows software programs have an .EXE file extension. 64-bit versions of Windows run both 32 and 64-bit apps, while 32-bit versions only run 32-bit applications. Updated March 27, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows Copy

Windows 10

Windows 10 Windows 10 is a major version of the Microsoft Windows operating system that was released on July 29, 2015. It is built on the Windows NT kernel and follows Windows 8. Part of the reason Microsoft decided to name the 2015 release "Windows 10" (and skipped "Windows 9") is that the operating system is designed to be a new direction for Microsoft. One of the primary aims of Windows 10 is to unify the Windows experience across multiple devices, such as desktop computers, tablets, and smartphones. As part of this effort, Microsoft developed Windows 10 Mobile alongside Windows 10 to replace Windows Phone – Microsoft's previous mobile OS. Windows 10 also integrates other Microsoft services, such as Xbox Live and the Cortana voice recognition assistant. While Windows 10 includes many new features, it also brings back the Start Menu, which was dropped in Windows 8. The new and improved Start Menu provides quick access to settings, folders, and programs and also includes tiles from the Windows 8 interface. The bottom of the Windows 10 Start Menu includes a search bar that allows you to search both your local PC and the web. Another major change in Windows 10 is the introduction of the "Edge" web browser, which is designed to replace Internet Explorer (IE). While the OS still includes IE, Edge is the default browser in Windows 10. Other new features include Continuum, which automatically optimizes the user interface depending on whether you are using an external keyboard or touchscreen, and Action Center, which is similar to the Notifications bar in OS X. Windows 10 also supports multiple desktops on a single monitor and provides Snap Assist, a feature that helps organize windows on the screen. One of the biggest differences between Windows 10 and previous releases of Windows is that the Windows 10 upgrade is available for free to Windows 7 and Windows 8 users. However, Microsoft still charges a licensing fee for copies of Windows 10 shipped with new computers and for non-upgrade purchases. The full version of Windows 10 Home is available for $120 and Windows 10 Pro costs $200. Updated July 30, 2015 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows_10 Copy

Windows 11

Windows 11 Windows 11 is a major version of the Microsoft Windows operating system. It was released on October 5, 2021, and follows Windows 10, the previous version of Windows, released in 2015. The Windows 11 user interface is similar to Windows 10 but includes several minor changes. For example, the task bar, left-aligned since Windows 95, is centered by default. Windows have slightly rounded corners (like macOS) and a subtle blurred translucent effect. "Snap Layouts" make it easy to organize multiple windows in a preset grid. Widgets in Windows 11 provide a quick way to view frequently-accessed information, such as weather, calendar events, and to-do lists. Chat from Microsoft Teams is integrated into the OS, making it easy to connect with other users. The Microsoft Store provides a simple way to find and download applications. Windows 11 is a desktop operating system, meaning it is designed to run on desktop and laptop computers. While it is not a mobile OS, Windows 11 supports touchscreen input, available on many Windows machines. It also supports several "touch gestures" for scrolling, zooming, and hiding and showing windows. The Windows 11 system requirements at the time of launch are: CPU: 1 GHz or faster 64-bit processor (such as an Intel or AMD x86 CPU) GPU: Graphics card or integrated processor compatible with DirectX 12 or later RAM: 4 GB or more Storage: 64 GB storage device or larger Other hardware: Trusted Platform Module (TPM) chip, Secure Boot capable firmware Additional requirements: Internet connectivity, Microsoft account Updated November 20, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows_11 Copy

Windows 7

Windows 7 Windows 7 is an operating system released by Microsoft on October 22, 2009. It follows the previous (sixth) version of Windows, called Windows Vista. Like previous versions of Windows, Windows 7 has a graphical user interface (GUI) that allows you to interact with items on the screen using a keyboard and mouse. However, Windows 7 also includes a feature called "Windows Touch" that supports touchscreen input and multitouch functionality. For example, you can right-click a file by touching it with one finger and tapping it with another. You can also zoom in on an image by touching it with two fingers, then spreading your fingers apart. Windows 7 is bundled with several touch-ready programs that are designed for touchscreen use. Windows 7 also includes several new multimedia features. One example is "Play To," a program that allows you to stream audio and video to different computers or devices within your house. The "HomeGroup" feature makes it easy to share media files and other data between computers. It also makes it possible to share printers on a home network. The "Remote Media Streaming" feature allows you to access the music, video, and photo libraries on your computer from remote locations. The search feature in Windows 7, called "Windows Search," allows you to see results of searches as soon as you start typing in the search box. Windows Search categorizes the results by file type and displays text snippets that indicate where the search phrase was found in each result. After the search results are returned, it is possible to narrow the results by filtering them by date, file type, file size, and other parameters. You can search local drives, external hard drives, and networked drives all using the standard Windows Search interface. Windows 7 is available in the following editions: Windows 7 Home Premium - the standard version installed with most home PCs and includes all of the features listed above. Windows 7 Professional - typically installed on business computers and includes all the Home Premium features as well as additional features such as Windows XP mode (XPM) and Domain Join. Windows 7 Ultimate - the most complete version, which has all of the Professional features as well as BitLocker data protection and additional language support. The system requirements for Windows 7 are: 1 GHz or faster 32-bit (x86) or 64-bit (x64) processor 1 GB of RAM or 2 GB of RAM for the 64-bit version 16 GB of available hard disk space or 20 GB for the 64-bit version DirectX 9 graphics device with WDDM 1.0 or higher driver Updated February 17, 2010 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows7 Copy

Windows 8

Windows 8 Windows 8 is a version of the Microsoft Windows operating system released on October 26, 2012. It was the first major update to Windows since Windows 7, which was released over three years earlier. While Windows 7 offered several performance improvements over Windows Vista, there were few changes to the look and feel of the operating system. Windows 8, on the other hand, provides an entirely new user interface compared to its predecessor. This interface (initially called "Metro," but now labeled the "Modern UI" style) displays a collection of tiles rather than a traditional desktop environment. These tiles provide access to commonly used programs and tools, such as Internet Explorer, Maps, Weather, Photos, Videos, Music, and the Windows Store. Several of the tiles, such as the Weather and social networking tiles, are updated in real time. The goal of the new Windows 8 interface is to work on both traditional PCs, such as desktop computers and laptops, and on tablet PCs. Windows 8 supports both touchscreen input and traditional input devices, such as a keyboard and mouse. This flexibility allows Windows 8 to run on a wide range of desktop and portable devices, and it is especially well-suited for hybrid computers that include a touchscreen as well as a keyboard and mouse. For users who don't need the touchscreen functionality, Windows 8 still includes the traditional Windows desktop and Windows Explorer, which can be accessed from the home screen. In other words, if you don't want to use the new tile-based interface, you can simply bypass that "layer" of the interface and access the Windows desktop you are used to. Microsoft has also provided several performance improvements to Windows Explorer and added a few new interface elements, such as "File," "Home," "Share," and "View" tabs to the top of each window. Each of these tabs includes one-click access to multiple options. For example, the "View" tab allows you to show hidden files and show or hide file extensions, two options that used to require several steps to change in previous versions of Windows. Updated April 6, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows_8 Copy

Windows RT

Windows RT Windows RT is an operating system developed by Microsoft for mobile PCs with ARM-based CPUs. It was released on October 26, 2012, the same day as Windows 8 and the Microsoft Surface tablet. The Windows RT user interface is identical to Windows 8 and includes the same tile-based Start screen. It also provides access to the traditional Windows desktop, where you can manage files and folders and run applications. While you can use Windows RT with a standard keyboard and mouse, it is designed primarily for touchscreen input. The operating system features an "always on" design, which means it receives mail, contacts, and calendar updates even while in standby mode. While Windows RT looks and feels like Windows 8, it cannot run all of the applications that Windows 8 runs. Instead, Windows RT can only run software programs downloaded from the Windows Store. If you attempt to install an unknown third party app in Windows RT, you will see an error message saying the app cannot run on the PC. In order to install a new program on Windows RT, you must first click the Store tile on the Start screen and download the program directly from the Windows Store. Updated November 1, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windows_rt Copy

Windows Vista

Windows Vista Windows Vista is an operating system developed by Microsoft. The business version was released at the end of 2006, while the consumer version shipped on January 30, 2007. The Vista operating system includes an updated look from Windows XP, called the "Aero" interface. The desktop, windows, icons, and toolbars have a smoother 3D look, similar to the Mac OS X interface. These graphics are generated using the new Windows Presentation Foundation (WPF) graphics subsystem included with Windows Vista. Other improvements include faster-indexed file searching (which can locate text within files), built-in web services called the Windows Communication Foundation (WCF), support for the new XML Paper Specification (XPS) document format, and numerous security improvements. Windows Vista was code-named "Longhorn" for much of the development process. The operating system was originally slated to ship in 2003 as an update to Windows XP, but Microsoft decided to make additional updates to the operating system and scheduled it for release in 2005. Several delays pushed back the release date to 2006 and eventually to the beginning of 2007. To ship the consumer version by early 2007, the new file system called Windows Future Storage, or WinFS, was left out of the release and was later canceled. Overall, Vista is a significant upgrade to the Windows operating system. The interface feels more modern, file navigation has been improved, and system security has been designed to be stronger than Windows XP. If you plan to purchase Windows Vista for your system, you can choose one of five options: Business - designed for small business users and streamlined for work-oriented tasks Enterprise - meant for large, global organizations with complex IT infrastructures Home Basic - the most basic version of Vista designed for the average home user Home Premium - a more robust home version that includes extra security and multimedia features Ultimate - includes all the features from the Home Premium and Business versions of Vista The absolute minimum system requirements for Vista are: 800 MHz processor 512 MB of RAM 20 GB hard drive with at least 15 GB of available space Super VGA graphics support CD-ROM drive However, Microsoft recommends the following system requirements: 1 GHz 32-bit (x86) or 64-bit (x64) processor 1 GB of RAM 40 GB hard drive with at least 15 GB of available space DirectX 9 graphics support with a WDDM Driver 128 MB (minimum) of video RAM Pixel Shader 2.0 in hardware 32 bits per pixel DVD-ROM drive Audio Output Internet access Because many of Vista's new features require the recommended system requirements, it may be best to upgrade your operating system only if your computer meets or exceeds the recommended specifications. Otherwise, waiting to buy a new machine with Windows Vista preinstalled is probably the best choice. Updated November 29, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windowsvista Copy

Windows XP

Windows XP Microsoft Windows XP was introduced in 2001 and is the most significant upgrade to the Windows operating system since Windows 95. The previous version of Windows, called Windows Me (or Millennium Edition), still had the look and feel of Windows 95 and was known to have stability issues and incompatibilities with certain hardware. Windows XP addressed many issues of its predecessor and added a number of other improvements as well. It is a stable operating system since it is built on the Windows 2000 kernel, which is known for its reliability. XP also has a new, more modern look, and an interface that is easier to navigate than previous versions of Windows. While not written from the ground up, like Mac OS X, Windows XP is a substantial system update. The letters "XP" stand for "eXPerience," meaning the operating system is meant to be a new type of user experience. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/windowsxp Copy

WINS

WINS Stands for "Windows Internet Name Service." WINS is a service that enables Windows to identify NetBIOS systems on a TCP/IP network. It maps NetBIOS names to IP addresses, which is a more standard way to identify network devices. WINS is similar to DNS, which is used for resolving domain names. Like DNS, WINS requires a server to perform name resolutions and return the IP address to client systems. However, instead of resolving domain names, WINS is used specifically to locate NetBIOS systems. For example, WINS may translate the NetBIOS name WorkComputer00 to the IP address 192.168.1.20. This allows other systems on the network to access WorkComputer00 directly by its IP address. Early versions of Windows (before Windows 2000) located network devices by their NetBIOS names. Therefore, Windows servers were required to run WINS in order to access network systems. Windows 2000 and later can use DNS to locate network systems, which means WINS is not required on modern Windows servers. However, many Windows systems still run WINS for backwards compatibility with older devices. Updated September 13, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wins Copy

Winsock

Winsock Winsock is an API that allows Windows software developers to connect their applications to the Internet. It provides a basic set of functions that developers can call on to help their apps communicate with remote computers and send or receive data without writing their own protocol interface. Originally named "Windows Socket API," Winsock controls the creation of network sockets that direct data to specific applications running on specific computers. It consists of a library of networking functions that developers can use to access basic network protocols, including TCP, UDP, IP, and ICMP (among others). It abstracts most details of establishing connections and transferring data to let the developer focus on other functionality in their application. If necessary, a developer can create raw sockets that allow an application to access other protocols directly. The Winsock API provides an interface between applications and the TCP/IP protocol networking stack The Winsock API was developed in 1992 by developers from several software companies (including Microsoft, Novell, 3Com, and Sun Microsystems) to provide a single standard API for Windows networking. Before Winsock, Windows did not have its own networking software stack and instead relied on network software from several vendors, who each provided their own API. This fragmentation made it difficult for developers to create programs that worked on multiple systems. After the release of Winsock as an add-on for Windows 3.1, developers could create network-connected applications that worked on all Windows computers. Every version of Windows since Windows 95 has included the Winsock library. Updated November 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/winsock Copy
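Higher-level languages reach Winsock through their standard socket libraries rather than calling it directly. A minimal sketch in Python, whose socket module is implemented on top of Winsock when running on Windows (the host name and port are illustrative):

    import socket

    # Create a TCP socket and connect to a web server on port 80.
    # On Windows, these calls are carried out by the Winsock library.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(sock.recv(1024).decode(errors="replace"))   # first part of the HTTP response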

Wired

Wired In computing terminology, the term "wired" is used to differentiate between wireless connections and those that involve cables. While wireless devices communicate over the air, a wired setup uses physical cables to transfer data between different devices and computer systems. A wired network is a common type of wired configuration. Most wired networks use Ethernet cables to transfer data between connected PCs. In a small wired network, a single router may be used to connect all the computers. Larger networks often involve multiple routers or switches that connect to each other. One of these devices typically connects to a cable modem, T1 line, or other type of Internet connection that provides Internet access to all devices connected to the network. Wired may refer to peripheral devices as well. Since many keyboards and mice are now wireless, "wired" is often used to describe input devices that connect to a USB port. Peripherals such as monitors and external hard drives also use cables, but they are rarely called wired devices since wireless options are generally not available. While many peripherals are now wireless, some users still prefer wired devices, since they have a few benefits over their wireless counterparts. For example, an Ethernet connection is not prone to signal interference that can slow down Wi-Fi connections. Additionally, wired network connections are often faster than wireless ones, which allows for faster data transfer rates. Some users also prefer wired peripherals since there is no need to replace batteries on a regular basis. Gamers especially prefer wired keyboards and mice since they have lower latency and can be backlit, thanks to the power provided by the USB connection. Updated March 12, 2016 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wired Copy

Wireframe

Wireframe A wireframe is a three-dimensional model that only includes vertices and lines. It does not contain surfaces, textures, or lighting like a 3D mesh. Instead, a wireframe model is a 3D image comprised of only "wires" that represent three-dimensional shapes. Wireframes provide the most basic representation of a three-dimensional scene or object. They are often used as the starting point in 3D modeling since they create a "frame" for 3D structures. For example, a 3D graphic designer can create a model from scratch by simply defining points (vertices) and connecting them with lines (paths). Once the shape is created, surfaces or textures can be added to make the model appear more realistic. The lines within a wireframe connect to create polygons, such as triangles and rectangles, that together represent three-dimensional shapes. The result may be as simple as a cube or as complex as a three-dimensional scene with people and objects. The number of polygons within a wireframe is typically a good indicator of how detailed the 3D model is. 3D Rendering Since wireframes only contain vertices and lines, they can be rendered very quickly. Therefore, a complex 3D image or animation might be reduced to a wireframe model so it can be rendered and edited in real-time. While wireframe representations of 3D animations were commonly used in early 3D design, today's GPUs are so fast, they can often render fully shaded surfaces and textures in real-time with little to no lag. NOTE: A "wireframe" may also refer to a mockup or prototype of a website layout or app design. Updated March 22, 2017 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wireframe Copy
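A wireframe is ultimately just a list of vertices plus the lines (edges) that connect them. A minimal Python sketch describing a unit cube as data, with no rendering involved (the coordinates are arbitrary):

    # A cube wireframe: 8 vertices (x, y, z) and 12 edges that connect them by index.
    vertices = [
        (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face
    ]
    edges = [
        (0, 1), (1, 2), (2, 3), (3, 0),   # bottom face
        (4, 5), (5, 6), (6, 7), (7, 4),   # top face
        (0, 4), (1, 5), (2, 6), (3, 7),   # vertical edges
    ]

    for a, b in edges:
        print("line from", vertices[a], "to", vertices[b])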

Wireless

Wireless In the computing world, the term "wireless" can be rather ambiguous, since it may refer to several different wireless technologies. The two most common types of wireless capabilities computers have are Wi-Fi and Bluetooth. Wi-Fi is the technology used for wireless networking. If your computer has a wireless card, it is most likely Wi-Fi compatible. The wireless card transmits to a wireless router, which is also based on the Wi-Fi standard. Wireless routers are often connected to a network, cable modem, or DSL modem, which provides Internet access to anyone connected to the wireless network. Bluetooth is the technology often used for wireless keyboards and mice, wireless printing, and wireless cell phone headsets. In order to use a device such as a Bluetooth keyboard or mouse, your computer must be Bluetooth-enabled or have a Bluetooth adapter installed. Computers may also use other wireless technologies aside from Wi-Fi and Bluetooth. Products such as remote controls and wireless mice may use infrared or other proprietary wireless technologies. Because of the many wireless options available, it is a good idea to check the system requirements of any wireless device you are considering buying. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wireless Copy

Wizard

Wizard A wizard is a part of an application's user interface that guides a user through several steps to complete a task. It takes a complicated process and breaks it into smaller steps, directing the user along and explaining each option. A wizard typically uses friendlier, more natural language than other interface elements to make it approachable for less-experienced users. A typical wizard appears in a separate dialog box, sidebar, or other distinct part of an application window. It presents one option or setting at a time and provides context for that option to help you understand the result of each choice. It also lets you know how far along in the process you are, indicating which step you're on and how many steps remain. For example, Microsoft Word includes a Mail Merge wizard that helps guide you through automatically creating form letters using a database of contact information. It asks what type of document you want to make, has you select a list of recipients in a database or spreadsheet, then helps you write a letter by inserting automatic name and address information. It can even send the finished letters to the printer at the end. Experienced users can skip the wizard and do each step on their own, but the wizard ensures that you don't forget a necessary step and tailors each part towards the type of document you want to make. Microsoft Word displaying the Mail Merge wizard in a sidebar Many types of software use wizards to help guide you through a task. Windows, macOS, and mobile operating systems use wizards (or assistants) to guide you through the installation process and set up options as you like. Application installers use them to let you select a destination folder and which features to include or exclude. Spreadsheet and database applications use wizards to guide you through data import and export options, and desktop publishing applications often use them to help you set up a document exactly how you want. NOTE: The term "wizard" is most often associated with Windows and Windows applications, while other platforms like macOS and iOS use "assistant" instead. Updated September 7, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wizard Copy

WLAN

WLAN Stands for "Wireless Local Area Network." A WLAN, or wireless LAN, is a network that allows devices to connect and communicate wirelessly. Unlike a traditional wired LAN, in which devices communicate over Ethernet cables, devices on a WLAN communicate via Wi-Fi. While a WLAN may look different than a traditional LAN, it functions the same way. New devices are typically added and configured using DHCP. They can communicate with other devices on the network the same way they would on a wired network. The primary difference is how the data is transmitted. In a LAN, data is transmitted over physical cables in a series of Ethernet packets. In a WLAN, packets are transmitted over the air. As wireless devices have grown in popularity, so have WLANs. In fact, most routers sold are now wireless routers. A wireless router serves as a base station, providing wireless connections to any Wi-Fi-enabled devices within range of the router's wireless signal. This includes laptops, tablets, smartphones, and other wireless devices, such as smart appliances and smart home controllers. Wireless routers often connect to a cable modem or other Internet-connected device to provide Internet access to connected devices. LANs and WLANs can be merged together using a bridge that connects the two networks. Wireless routers that include Ethernet ports can automatically combine wired and wireless devices into the same network. Advantages of WLANs The most obvious advantage of a WLAN is that devices can connect wirelessly, eliminating the need for cables. This allows homes and businesses to create local networks without wiring the building with Ethernet. It also provides a way for small devices, such as smartphones and tablets, to connect to the network. WLANs are not limited by the number of physical ports on the router and therefore can support dozens or even hundreds of devices. The range of a WLAN can easily be extended by adding one or more repeaters. Finally, a WLAN can be easily upgraded by replacing routers with new versions — a much easier and cheaper solution than upgrading old Ethernet cables. Disadvantages of WLANs Wireless networks are naturally less secure than wired networks. Any wireless device can attempt to connect to a WLAN, so it is important to limit access to the network if security is a concern. This is typically done using wireless authentication such as WEP or WPA, which encrypts the communication. Additionally, wireless networks are more susceptible to interference from other signals or physical barriers, such as concrete walls. Since LANs offer the highest performance and security, they are still used for many corporate and government networks. NOTE: WLAN should not be confused with "WAN," which is a wide area network. Updated May 22, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wlan Copy

Word

Word Microsoft Word is a word processing program developed for the Windows and Macintosh platforms. It is included in all editions of the Microsoft Office suite and is also available as a standalone program for Windows. The first version of Word was released for DOS in 1983 and had a text-only interface. In 1985, Microsoft released a graphical version Word for the Mac OS. Microsoft released "Word for Windows" in 1989, and it soon became the most popular word processing program on any platform. In 1995, Microsoft changed the name of Word for Windows to simply "Word" and bundled the program with Office 95. Since then, several versions of Word have been released for both the Windows and Macintosh platforms. While early versions of Word served as basic text editors, modern versions include many advanced word processing features. Some examples include spellchecking, auto-correct, text formatting, styles, and custom page layouts. Word documents may also include headers, footers, images, shapes, tables, charts, and "SmartArt," which is used for visually representing concepts and ideas. Word also includes a "Track Changes" feature that allows editors to review documents and add comments and make changes. For many years, Word saved documents in a proprietary binary format. However, as other office suites began using open formats, Microsoft transitioned to the "Office Open XML" format. This format was introduced with Office 2007 for Windows and Office 2008 for Mac OS X. Now, Microsoft Word saves documents in a standard XML format that is compressed using Zip compression. This new format is more compatible with other software and uses less disk space. File extensions: .DOC, .DOCX, .DOT, .DOTX Updated March 22, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/word Copy

Word Processor

Word Processor A word processor is a software program for creating and editing rich text documents. Word processors include more features than simple text editors, offering users the ability to customize fonts, text styles, and page layouts. They often include proofreading features like spell check and grammar checks. Word processors typically include toolbars and panels that provide advanced editing capabilities. Examples include line and paragraph spacing, text alignment, list formatting, and indentation margins. Some word processing applications allow you to create text styles, such as titles and subtitles, which you can apply to text throughout a document. For example, you can create a "Header 1" style for section headers. If you change the style, it will update all the section headers in the document. Microsoft Word toolbar (macOS) Page layout controls allow you to adjust the overall structure of a document and how it will look when printed. For instance, you can adjust the size of the page margins to increase or decrease the space between the text and the edge of the page. You can also add headers and footers that appear in the top or bottom margins of every page. Many apps allow you to insert automatic page numbers as well. A word processor is one of the core components of an "office suite," along with a spreadsheet application and presentation software. Some office suites include several other applications as well. Examples include Microsoft Word, Apple Pages, OpenOffice Writer, and Google Docs. Updated March 8, 2023 by Brian P. Reviewed by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/word_processor Copy

Word Wrap

Word Wrap Word wrapping occurs when a line of text automatically "wraps" to the following line when it reaches the edge of a page or text field. Most text editors and word processors use word wrap to keep the text within the default margins of the page. Without word wrap enabled, the text continues on one line until the user presses "Enter" or "Return" to insert a line break. If the text extends past the window margin, the user must scroll horizontally to see the whole line. When word wrap is enabled, the text editor takes the first word that does not fit on a line of text and moves it to the beginning of the next line. Word processing programs can also automatically hyphenate long words at the end of a line. You can prevent word wrap from inserting a line break between two specific words by using a non-breaking space character instead of a normal space. Text editors treat two words separated by a non-breaking space as if they are a single word, instead breaking at the next available normal space. NOTE: While it is uncommon, sometimes a word (or string of characters) will take up more than one line. In this case, the word wrap feature will break the string of characters once it reaches the end of the line and continue it onto the next one. Updated February 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wordwrap Copy
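Many programming languages include a word-wrap helper. A short Python sketch using the standard textwrap module (the sample sentence and the 40-character width are arbitrary):

    import textwrap

    text = ("Word wrap moves any word that does not fit on the "
            "current line to the beginning of the next line.")

    # Break the text into lines no longer than 40 characters, wrapping between words
    for line in textwrap.wrap(text, width=40):
        print(line)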

WordArt

WordArt WordArt is a text modifying feature in Microsoft Word, a popular word processing program. It includes effects such as shadows, outlines, colors, gradients, and 3D effects that can be added to a word or phrase. WordArt can also bend, stretch, skew, or otherwise modify the shape of the text. WordArt can be added within a Word document by selecting Insert→Picture→WordArt... This opens the WordArt dialog box, which gives the user several text styles to choose from. Once a style has been selected, the user types the text that the style will be applied to and the result is saved as an image within the document. The WordArt can then be moved or modified by selecting Modify→WordArt. The OpenOffice.org office suite has a similar feature to WordArt, called FontWork. Photoshop 7 and later also includes a Warp Text feature, which provides similar text modification options. Updated December 3, 2007 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wordart Copy

WordPress

WordPress WordPress is a free content management system used to build and maintain websites. Its ease of use and unique blogging features have helped it become the most popular blogging tool on the web. WordPress provides a web-based user interface for designing, publishing, and updating websites. Instead of writing HTML, you can simply choose one of many different website templates or "themes" that has a design you like. You can then modify the layout and build a custom navigation bar. Once the site layout is complete, you can use WordPress' online interface to create individual pages. Each page may include formatted text, links, images, and other media. You can publish completed webpages or blog updates by simply clicking the Publish button. The WordPress interface makes it easy for anyone without web development experience to create and publish a website. The built-in blogging tools provide a simple way to track individual posts, visitors, and user comments. If WordPress' built-in capabilities are not sufficient for your needs, you can install different plug-ins that provide extra features. Examples include social media buttons, image galleries, and web forum tools. While there are thousands of WordPress templates and plug-ins available, the WordPress system still has its limitations. Because it is a template-based service, you must begin with a pre-built website rather than create pages from scratch. Additionally, you cannot insert scripts or maintain a database with the same level of control that a custom website offers. Therefore, most companies and large organizations still hand-code websites or develop them using a WYSIWYG editor like Dreamweaver. For many individuals, however, WordPress provides a great blogging system that is both convenient and free. There is no cost to sign up for WordPress or to use the service. It is even free to publish a custom website or blog at the wordpress.com domain (such as custom-name.wordpress.com). However, if you want a custom website address or need extra features, such as more disk space or premium themes, WordPress offers "Premium" and "Business" services for an annual fee. Updated January 22, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wordpress Copy

Workstation

Workstation A workstation is a high-performance computer configured to perform specific computationally-intensive tasks, such as scientific research, 3D modeling/animation, or audio/video production. They typically have more powerful processors than a home desktop computer and large amounts of storage and memory. Workstations in a workplace are often set up for use by a single user and networked together to share data and resources with others. The components used in a workstation are often similar to those in any other desktop computer, only with increased performance and higher reliability. For example, many workstations use a type of error-correcting memory known as ECC RAM, which is more expensive but also far more reliable. The increased cost of highly-reliable components is justified when workstations are professional tools used for professional work, and any downtime can be costly. Workstations may also have extra hardware that is necessary for the niche that the workstation fills. An audio production workstation, for example, would include extra audio interface ports designed to connect professional microphones; a video production workstation, meanwhile, would include specialized video processing hardware to handle multiple high-resolution video streams at once. Many workstations are part of a network of computers linked together to process data or perform computational work. In these cases, the workers interact directly with their workstations and merely hand off tasks to servers for processing. For example, 3D animators use their workstations to construct models, set up scenes, and animate character movements. They can see previews of their work on their workstations, but for final rendering, they send that data to a render farm consisting of multiple linked server computers. Updated January 4, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/workstation Copy

Worm

Worm The word "worm" can mean different things, depending on the context. It most frequently refers to a type of computer virus. It may also refer to a type of write-once data storage. Computer Virus A worm is a variety of computer virus, similar to a Trojan Horse virus, that replicates itself without any human interaction. Once a worm has infected a host computer, it typically scans the local network for other computers with a particular vulnerability; when it finds one, it exploits that vulnerability to infect that computer, then scans the network again for another computer to infect. Worms automatically spread themselves over a network when activated, but they can initially infect a computer using the same tricks other kinds of viruses do. They can spread using a file sent as a suspicious email attachment or file download. These files don't need to be executable files to spread a worm; hackers have found vulnerabilities in other file formats that allow them to embed malicious code in images, documents, and video and audio files. Worms can also spread online via code in HTML web pages that exploit vulnerabilities in web browsers. While this self-replicating behavior is what defines a worm, the other effects of a worm vary from worm to worm. Many worms are designed only to spread themselves; these worms cause computer systems and networks to slow down by gradually filling up storage space and memory and overloading network bandwidth. Other worms act as a way to spread a payload, carrying another virus that could delete files, encrypt files to hold for ransom, install spyware, or create a backdoor to allow remote control of a computer. As with other types of viruses, running antivirus software and keeping the virus definitions up-to-date will help defend against worms. Optical Media WORM (stands for "Write Once, Read Many") is a broad term for data storage media that can be written to once, then becomes read-only. Using a WORM data storage method ensures that the data written to the medium remains unaltered. Early WORM data storage devices used writable optical discs stored in a protective cartridge. These drives were expensive and very large (with cartridges 12 inches or more in diameter) but had relatively high storage capacity and were useful for archiving data. They were eventually made obsolete by the introduction of writable CD-R discs; writable DVD-R and DVD+R discs are now the most commonly-used form of WORM media. Updated December 12, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/worm Copy

WPA

WPA Stands for "Wi-Fi Protected Access." WPA is a security protocol for creating secure Wi-Fi networks. It was introduced in 2003 to replace the previous security protocol, WEP, due to vulnerabilities in WEP's encryption method. WPA uses 256-bit encryption and is considered the minimum standard for wireless network security, supported by all Wi-Fi-certified devices since 2006. Later generations (WPA2 and WPA3) are even more secure. WPA uses encryption to protect traffic over wireless networks from eavesdropping. Both sides of a wireless connection need to use the same encryption/decryption key. WEP was vulnerable to eavesdropping because it used the same encryption key for each data packet that traveled over a network. WPA uses a technique called Temporal Key Integrity Protocol (TKIP) to generate a new encryption key for each data packet, making it more secure than WEP. The WPA specification supports two operation modes: WPA-Personal and WPA-Enterprise. WPA-Personal uses a pre-shared key for encryption; this key also functions as the Wi-Fi network's passphrase. Every device that connects to a WPA-Personal network uses the same passphrase, and changing that passphrase requires every client to update their settings. WPA-Enterprise instead uses an authentication server to validate each user's unique login credentials. A WPA-Enterprise access point generates a new encryption key each time a user connects to the network. WPA and WPA2 are the most common Wi-Fi security protocols, available in Personal and Enterprise implementations. The developers of WPA, the Wi-Fi Alliance industry group, intended for devices designed for WEP to receive firmware updates to add support for WPA. Running on the same hardware as WEP limited the computational resources available to WPA encryption. While WPA is more secure than WEP, it is still vulnerable to attacks that can decrypt data packets encrypted using TKIP. Access points designed for WPA2 have more powerful hardware, allowing them to use the more secure Advanced Encryption Standard (AES) encryption method. Updated June 23, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/wpa Copy

WPA2

WPA2 Stands for "Wi-Fi Protected Access 2." It is the second version of WPA, a technology used for secure Wi-Fi connections. Most Wi-Fi devices manufactured after 2006 support both WPA and WPA2. WPA vs WPA2 Both WPA and WPA2 provide encrypted data transfers over a Wi-Fi connection. Both require a password with a minimum length of 8 characters. The technologies differ in the way they encrypt the data. WPA uses the Temporal Key Integrity Protocol (TKIP) to dynamically vary the encryption key shared between the access point and connected clients. The continually changing key provides more secure authentication than the earlier WEP wireless encryption standard. WPA2 further improves Wi-Fi authentication with CCMP, the "Counter Mode with Cipher Block Chaining Message Authentication Code Protocol," which requires a 128-bit key. WPA uses the same RC4 encryption method as WEP, which is considered weak encryption by modern standards. WPA2 requires AES (Advanced Encryption Standard), which is significantly stronger and harder to break. AES encryption supports 128, 192, and 256-bit keys. NOTE: Most Wi-Fi networks use WPA2 by default. If you are offered a choice between WEP, WPA, and WPA2, it is best to choose WPA2 since it is the most secure. Updated November 28, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wpa2 Copy

WPAN

WPAN Stands for "Wireless Personal Area Network." A WPAN is a specific type of PAN (Personal Area Network) in which all the connections are wireless. It may use multiple types of wireless connections, but the most common is Bluetooth. The "personal" aspect of a WPAN means it is meant to be used by a specific individual. For example, you might have a smartphone and smartwatch that communicate with each other. These devices securely transmit encrypted information back and forth, such as activity data, text messages, and alerts. The WPAN provides consistent communication between the devices and prevents the data from being shared with devices outside the network. While most WPANs communicate over Bluetooth, devices may also transfer data over Wi-Fi. For example, a laptop may connect to a smartphone via Wi-Fi to make and receive calls. A tablet may sync with a smartphone over a home Wi-Fi network. In some cases, other radio frequencies or infrared signals may also be used. Since WPANs are wireless, they are simple to set up and maintain. Once devices are paired using Bluetooth, they can automatically discover each other when in range, typically within 30 meters. A low-energy version of Bluetooth, called BLE, is often used to locate paired devices and establish connections between them. Data transfers, which require a stronger signal, take place over standard Bluetooth or Wi-Fi. Updated April 24, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wpan Copy

Wrapper

Wrapper In computer science, a wrapper is any entity that encapsulates (wraps around) another item. Wrappers are used for two primary purposes: to convert data to a compatible format or to hide the complexity of the underlying entity using abstraction. Examples include object wrappers, function wrappers, and driver wrappers. 1. Object Wrapper An object wrapper is a class that encapsulates a primitive data type or another object. It may be used in Java, for example, to convert a char primitive to a Character object. By converting the primitive to a class, a developer can use a method, such as toUpperCase(), to modify the data. An object wrapper may also be used to convert the properties of a legacy class to ones that are compatible with newer code. 2. Function Wrapper A function wrapper encapsulates one or more functions. For example, a website's "send mail" function may wrap multiple functions that process form data, check the submission for spam, and send the message using a mail server. A function wrapper may also wrap a single function to allow it to work with newer or older code. For example, it may change or add parameters to make a function compatible with a newer API. 3. Driver Wrapper A driver wrapper allows a driver to function with an otherwise incompatible operating system. For example, an older graphics card may only support drivers designed for Windows 7. If a Windows 10 driver is not available, a driver wrapper may serve as an adapter, allowing the graphics card to use the Windows 7 driver in Windows 10. Driver wrappers may be provided by either the original equipment manufacturer (OEM) or a third-party developer. Updated August 6, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wrapper Copy
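The function wrapper idea is easy to show in code. A minimal Python sketch modeled on the "send mail" example above (the spam check and delivery steps are stand-ins for real form-processing and mail-server code):

    # A function wrapper: send_mail() hides the individual steps behind one call.
    def is_spam(message):
        return "buy now" in message.lower()

    def deliver(message):
        print("Delivering:", message)

    def send_mail(message):
        """Wrapper around the spam check and delivery steps."""
        if is_spam(message):
            print("Message rejected as spam")
        else:
            deliver(message)

    send_mail("Meeting moved to 3 PM")   # prints: Delivering: Meeting moved to 3 PM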

WWW

WWW Stands for the "World Wide Web." The World Wide Web is a subset of the Internet consisting of websites and webpages that are accessible via a web browser. It is also known simply as "the Web." While the terms "the Internet" and "the Web" are often used interchangeably, it is important to note that the Web is a collection of sites, documents, and resources that you can browse; the Internet is the network itself that also carries other types of data and traffic like SSH connections, FTP file transfers, online gaming, and email. Websites and other resources can be identified by text strings called Uniform Resource Locators, or URLs. A URL includes the domain name for a website, as well as the path to a specific page or file. A URL also includes the protocol used to access that webpage or file. Web servers use the Hypertext Transfer Protocol (HTTP), or its secure successor HTTPS, to transfer data to the client computers that request it. The web pages themselves contain content written in Hypertext Markup Language (HTML), along with CSS for text formatting and JavaScript for interactive elements. Clickable hyperlinks enable navigation between pages and websites. You can browse websites and other resources using a web browser. Early web browsers like Mosaic could only display text, while modern web browsers can display images, videos, and interactive elements. You can navigate to websites directly by entering a URL into your browser, or you can use a search engine to find what you are looking for. The Web can also help you access web applications, which are similar to desktop applications but run on a remote server and are controlled through the browser. Updated June 22, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/www Copy
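As a small illustration of how a client retrieves a resource from the Web, here is a sketch using Python's standard library; the URL is simply the reserved example.com domain:

    import urllib.request

    url = "https://example.com/"                # protocol, domain name, and path
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8")  # the page's HTML markup
        print(response.status)                  # HTTP status code, such as 200
        print(html[:200])                       # first part of the markup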

WYSIWYG

WYSIWYG Stands for "What You See Is What You Get," and is pronounced "wihzeewig." WYSIWYG refers to software that accurately represents the final output during the development phase. For example, a graphics program such as Photoshop is a WYSIWYG program because it can display images on the screen the same way they will look when printed on paper. Word processing programs like Microsoft Word and Apple Pages are both WYSIWYG editors, because they include page layout modes that accurately display what the documents will look like when printed. While WYSIWYG originally referred to programs that produce physical output, the term is now also used to describe applications that produce software output. For example, most Web development programs are called WYSIWYG editors since they show what Web pages will look like as the developer is creating them. This means that the developer can move text and images around the page to make it appear exactly how he or she wants before publishing the page on the Web. When the page is published, it should appear nearly the same on the Web as the way it looked in the Web development program. Of course, as most Web developers know, there is no guarantee that a Web page will look the same in two different browsers, such as Internet Explorer and Firefox. But at least a WYSIWYG editor can give developers a close approximation of what the published page will look like. Updated June 11, 2008 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/wysiwyg Copy

x64

x64 x64 is shorthand for 64-bit processor architecture. It is often contrasted with x86 architecture, which refers to 32-bit Intel processors, including the 386, 486, and 586 series. However, x64 refers to all 64-bit processors, regardless of the manufacturer. The "x86-64" label specifies a 64-bit x86 processor. The primary difference between a 32-bit and 64-bit processor is how the CPU addresses memory. A 32-bit processor can reference 2^32, or 4,294,967,296, addressable values. A 64-bit processor can access 2^64 values. 2^64 is not double 2^32, but 4,294,967,296 times more. Therefore, a 64-bit processor can reference 18,446,744,073,709,551,616 values. A 32-bit processor can only access about 4 GB of RAM. A 64-bit processor can access over 4 billion times more memory than a 32-bit processor, removing any practical memory limitations. x64 processors can run 64-bit applications, which are built and compiled for 64-bit hardware. It is important to make sure your processor meets the system requirements of a new application before installing it. Generally, you should install the 64-bit version if given a choice. History Throughout the 1980s and 1990s, most PC processors were 32-bit. In 1996, Nintendo released the Nintendo 64 gaming console, one of the first mass-market 64-bit devices. Ironically, the console only had 4 megabytes of RAM, or 1/1,000th of the 4-gigabyte limit of a 32-bit processor. But it paved the way for more 64-bit processors. Between 2000 and 2010, x64 processors grew in popularity. Both Microsoft and Apple released 64-bit versions of their operating systems. Since 2010, nearly all desktop and mobile devices have been built with x64 processors. Most applications are now 64-bit as well. NOTE: In 2019, Apple released macOS 10.15 Catalina, which dropped support for 32-bit applications. As of 2020, Microsoft Windows still supports both 32-bit and 64-bit applications. Updated July 29, 2020 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/x64 Copy
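The difference in addressable values is easy to verify with a quick calculation; here is a short Python sketch:

    # Number of distinct addresses a 32-bit and a 64-bit processor can reference
    addresses_32 = 2 ** 32
    addresses_64 = 2 ** 64

    print(addresses_32)                  # 4294967296 (about 4 GB of byte-addressable memory)
    print(addresses_64)                  # 18446744073709551616
    print(addresses_64 // addresses_32)  # 4294967296, so 2^64 is 2^32 times larger, not double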

x86

x86 x86 is the name of Intel's family of processors and the instruction set architecture that they share. It has been the most popular processor architecture since the 1980s, powering most personal computers running Microsoft Windows and servers running Unix and Linux. The x86 family has previously included 16-bit and 32-bit processors, and the most recent incarnation of the instruction set is known as x86-64 since it now supports 64-bit processors. The x86 architecture uses a complex instruction set (CISC) that has evolved over time. Extensions to the instruction set provide new capabilities while maintaining backward compatibility, allowing new processors to run software written and compiled for previous generations. This stability helped maintain its dominance in the personal computer market, as users could continue to use the same programs on new computers. When compared to other processor architectures, x86 is known for its high performance combined with high power consumption. This makes it ideal for desktop and server computers, while mobile devices like tablets and smartphones use low-power ARM processors instead. The x86 series began with the original 16-bit 8086 and 8088 CPUs, followed by later processors like the 80186, 80286, 80386, and 80486. The x86 name comes from the final three digits of those product numbers (286, 386, 486), with the "x" standing in for the digit that incremented with each generation. Even though Intel switched from using product numbers to using names like Pentium, Core, and Xeon, their processors continue to use the x86 instruction set. Intel and AMD are the primary manufacturers of x86 CPUs. Updated August 16, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/x86 Copy

x86-64

x86-64 x86-64 is a 64-bit version of the x86 processor architecture, which Windows PCs have used for several decades. It is similar to x64, but refers specifically to processors that use the x86 instruction set. CPUs with the x86-64 architecture run in 64-bit mode by default, but are also backward-compatible with 32-bit and 16-bit applications. In other words, any program that runs on a 32-bit x86 CPU should run on an x86-64 CPU with no emulation required. The main benefit of an x86-64 processor versus a standard 32-bit x86 CPU is significantly more addressable memory. A 32-bit processor can only reference 2^32 (4,294,967,296) addressable values in memory — roughly 4 gigabytes. A 64-bit processor supports 2^64 (18,446,744,073,709,551,616) addressable values. Therefore, an x86-64 CPU does not support double the memory of an x86 CPU, but 4,294,967,296 times more. Read more about the difference between 32-bit and 64-bit systems in the TechTerms Help Center. Intel and AMD are the two primary manufacturers of x86-64 processors. The two technologies are labeled Intel 64 and AMD64, respectively. AMD released its first x86-64 CPU (Opteron) in 2003, followed shortly by Intel, which released its first x86-64 CPU (Nocona) in 2004. Today, nearly all processors are 64-bit. Updated March 6, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/x86-64 Copy
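One way to see which architecture and word size your own system reports is a short Python check; the example outputs are typical values and will vary by platform:

    import platform
    import sys

    print(platform.machine())       # e.g. 'x86_64' on Linux/macOS or 'AMD64' on Windows
    print(platform.architecture())  # e.g. ('64bit', ...) for a 64-bit Python build
    print(sys.maxsize > 2 ** 32)    # True when the interpreter itself is 64-bit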

XaaS

XaaS Stands for "Everything as a Service." XaaS is an extension of SaaS (software as a service) with a broader scope that covers a range of IT services. All XaaS offerings have two defining criteria: They are cloud-based and require a monthly or annual subscription. Examples of XaaS services include: Software File storage Online backup Data security Data analytics Collaboration tools Streaming services Some XaaS tools have a web interface, some have desktop and mobile apps, and others provide both. Apple iCloud, for example, allows access to the bundled services through the web, macOS, and iOS apps. XaaS offerings span consumer and enterprise markets. Services like Netflix and Spotify are primarily geared towards consumers, while SAP and Salesforce tools are aimed at businesses. Some services, such as Microsoft 365 and Google Docs, are available to both. Updated August 2, 2022 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/xaas Copy

XAML

XAML Stands for "Extensible Application Markup Language" and is pronounced "zam-uhl." XAML is a markup language developed by Microsoft and is used for creating application interfaces. It is similar to HTML, which defines the content of a webpage. Like other markup languages, XAML uses tags to define objects. Tags can be nested within other tags to define objects within objects. The attributes of an object, such as the name, size, shape, and color, are defined within the tag. Below is an example of a basic XAML tag for a button: <Button Content="Click Me" Width="100" Height="30" /> XAML was introduced in 2006 along with WPF (Windows Presentation Foundation). WPF is an extension of the Microsoft .NET Framework that includes a display engine for rendering interface elements within a Windows application. XAML is used to define and link these elements. Developers can create XAML code from scratch or use a program like Microsoft Expression Studio or Blend for Visual Studio to generate XAML code using a WYSIWYG editor. XAML is supported by any Windows app that is built using WPF or the Universal Windows Platform. File Extension: .XAML Updated February 3, 2018 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/xaml Copy

XHTML

XHTML Stands for "Extensible Hypertext Markup Language." XHTML is a markup language used to create webpages. It is similar to HTML but uses a stricter, XML-based syntax. The first version of XHTML (1.0) was standardized in 2000. For several years, XHTML was the most common language used to create websites. It has since been superseded by HTML5. Why XHTML? As HTML evolved over the first few decades of the web, browsers became increasingly lenient in how they parsed webpage source code. The result was that websites were rendered inconsistently between browsers. One of the main goals of XHTML was to ensure webpages looked the same across multiple browsers. Since XHTML is based on XML rather than HTML, webpages coded in XHTML must conform to a strict XML syntax. A webpage that uses the "XHTML Strict" doctype (DTD) cannot contain any errors or invalid tags, leaving no ambiguity for the web browser. However, most XHTML sites used the "XHTML Transitional" doctype, which does not require perfect syntax and even allows HTML 4.01 tags. From 2001 to about 2011, XHTML was the standard markup language for web development. Some developers used a strict XHTML DTD, though most used the transitional doctype. Since most web developers preferred a more flexible language, the web eventually transitioned back to HTML. In 2014, HTML5 was officially recommended by the W3C. Most modern browsers still support both HTML and XHTML. Updated December 28, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/xhtml Copy
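For illustration, here is a minimal sketch of an XHTML 1.0 Strict document. Note the XML declaration, the required namespace, and self-closing tags such as <br />, which plain HTML does not require; the page content is just a placeholder:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Example Page</title>
      </head>
      <body>
        <p>Every tag must be lowercase, properly nested,<br /> and explicitly closed.</p>
      </body>
    </html>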

XLR

XLR Stands for "External Line Return." XLR is a type of connector commonly used in audio production. A standard XLR plug has three pins, which provide a balanced analog audio signal, reducing interference and noise. Most stage microphones have an XLR connector, which is why XLR cables are often called "mic cables." The three pins of an XLR cable include two signal wires and one ground. Pin 1: Ground (shield) - protects the signal from external interference Pin 2: Positive (hot) - carries the original audio signal Pin 3: Negative (cold) - carries an inverted (180° phase-shifted) version of the audio signal An XLR cable has a female connector on one end and a male connector on the other. It is a common misconception that the L and R in XLR stand for Left and Right. Instead, the three letters stand for Ground, Live (hot), and Return (cold). Noise Cancellation over XLR When transmitting audio, the hot cable (pin 2) carries the original audio signal, while the cold cable (pin 3) carries an inverted (180° phase-shifted) version of the signal. As the audio travels through an XLR cable, both wires are exposed to the same interference, but the audio signal remains inverted on one of them. The receiving device, such as a mixing console or audio interface, inverts the phase of one signal and sums the two. Since both wires picked up the same interference, the "noise" cancels out. The balanced design of XLR cables makes them ideal for long cable runs in live sound setups, broadcasting, and recording studios. They can also carry phantom power (a DC voltage of 12, 24, or 48 volts, with 48 volts the most common), which condenser microphones often require. NOTE: While an XLR cable has three wires, it is designed to carry one signal. Therefore, stereo audio requires two 3-pin XLR cables. Alternatively, a 5-pin XLR cable can transmit a stereo audio signal over a single cable. Updated September 6, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/xlr Copy
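The noise cancellation described above can be sketched in a few lines of Python; the signal and noise arrays are made-up values chosen only to show the arithmetic:

    import numpy as np

    signal = np.array([0.5, -0.2, 0.8, -0.6])    # original audio samples
    noise  = np.array([0.1,  0.1, -0.05, 0.02])  # interference picked up along the cable

    hot  =  signal + noise   # pin 2: original signal plus noise
    cold = -signal + noise   # pin 3: inverted signal plus the same noise

    # The receiver re-inverts one leg and sums the two (equivalent to hot - cold):
    output = hot - cold      # equals 2 * signal; the common noise cancels out
    print(output / 2)        # recovers the original samples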

XML

XML Stands for "Extensible Markup Language." XML is a markup language and file format that defines and stores arbitrary sets of data in a standardized way. Like HTML, XML uses tags to contain and structure data; unlike HTML, XML does not predefine these tags. Instead, authors can define their own XML tags to specify the type of content. Like HTML files, XML files are stored as plain text. A properly-formatted XML file is both human-readable and machine-readable. While XML refers to itself as a markup language, it can also be considered a "metalanguage" that defines other markup languages for specific purposes. Custom XML tags used to structure data can define whatever the author needs them to define. XML files are very flexible and used in many contexts — RSS feeds, SVG graphics, and Microsoft Office documents are all file types built on the structure of XML. The structure of an XML file is very similar to an HTML file. While an HTML file uses predefined tags that determine how information is displayed, an XML file uses custom tags to show how data is structured. Below is an example of an XML file, demonstrating how to store a recipe as XML: <recipe> <name>Ice Cream Sundae</name> <description>A simple ice cream treat</description> <ingredients> <ingredient> <name>Vanilla Ice Cream</name> <quantity>1 cup</quantity> </ingredient> <ingredient> <name>Chocolate Syrup</name> <quantity>2 Tbsp</quantity> </ingredient> </ingredients> <instructions>Scoop ice cream into bowl and drizzle with chocolate syrup. Enjoy!</instructions> </recipe> Instead of using predefined tags, like in HTML, the XML format allows custom tags to define whatever data is needed. In this example, the recipe tag encompasses an entire recipe; the name and description tags define some general information about it; the ingredients tag contains the individual ingredients; each ingredient has tags for both the name of that item and the quantity; finally, the instructions tag contains each step required to make the recipe. Updated November 14, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/xml Copy
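Because XML is machine-readable, the recipe above can be parsed with a few lines of Python using the built-in ElementTree module; this sketch assumes the sample structure shown in the definition:

    import xml.etree.ElementTree as ET

    xml_text = """
    <recipe>
      <name>Ice Cream Sundae</name>
      <description>A simple ice cream treat</description>
      <ingredients>
        <ingredient><name>Vanilla Ice Cream</name><quantity>1 cup</quantity></ingredient>
        <ingredient><name>Chocolate Syrup</name><quantity>2 Tbsp</quantity></ingredient>
      </ingredients>
      <instructions>Scoop ice cream into bowl and drizzle with chocolate syrup. Enjoy!</instructions>
    </recipe>
    """

    recipe = ET.fromstring(xml_text)
    print(recipe.find("name").text)        # Ice Cream Sundae
    for item in recipe.findall("./ingredients/ingredient"):
        print(item.find("name").text, "-", item.find("quantity").text)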

XMP

XMP Stands for "Extensible Metadata Platform." XMP is a universal metadata format developed by Adobe Systems and standardized by the International Organization for Standardization (ISO). It provides a standard format for sharing metadata between multiple applications. Many file formats include internal metadata, which describes the contents of the file. For example, JPEG images created by digital cameras typically include EXIF data, which defines the shutter speed, aperture, focal length, flash setting, and other information related to the image captured by the camera. While EXIF is a universally recognized format, other types of metadata are saved in proprietary formats that can only be read by a specific program. This makes it difficult or impossible for other programs to process the metadata. Adobe's XMP specification provides a solution to this problem by outlining a standard format for storing metadata. It uses standard XML and RDF syntax to define metadata properties and attributes. The "extensible" part of XMP means it can be used to describe any type of data. However, several specific XMP schemas have been developed to store metadata for common file types. Examples include PDFs, Photoshop documents, compressed images, and camera raw images. Each schema uses a predefined namespace, which specifies property names. For example, the PDF schema includes properties such as pdf:Keywords, pdf:PDFVersion, and pdf:Producer. XMP data is usually stored within an existing file, such as a Photoshop document. However, it can also be saved externally as a separate .XMP file that is associated with a specific document. Since the data is saved in a standard format, XMP files can be read by any program that supports XMP. File extension: .XMP Updated August 8, 2013 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/xmp Copy
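As a rough sketch of what XMP metadata can look like inside a file, here is a short fragment; the property values are hypothetical, but the x:xmpmeta and RDF structure and the pdf: properties follow the pattern described above:

    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about=""
            xmlns:pdf="http://ns.adobe.com/pdf/1.3/">
          <pdf:Producer>Example PDF Tool 1.0</pdf:Producer>
          <pdf:PDFVersion>1.7</pdf:PDFVersion>
          <pdf:Keywords>sample, metadata, xmp</pdf:Keywords>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>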

XSLT

XSLT Stands for "Extensible Stylesheet Language Transformations." XSLT is a language designed to transform XML documents into other formats. XSLT documents are called stylesheets and contain instructions for translating an XML input file into another format. An XSLT processor program uses stylesheets to analyze an XML file and convert its tags and data into another XML format, or into file formats like HTML and plain text documents. Not all XML files use the same markup tags for the same data. Unlike HTML, which uses universal definitions to standardize every tag for a specific purpose, the creator of an XML format can create arbitrary tags to contain and structure data. XSLT stylesheets contain instructions that translate tags and data structures when run through an XSLT processor. XSLT can help convert XML files created for one program into XML files compatible with another, or restructure them into files intended for display in other ways. An XML file and an XSLT stylesheet are run through an XSLT processor to produce an output file. For example, if you have two recipe manager apps that save recipes using XML, they may use different XML tags to denote ingredients and instructions. You could write an XSLT stylesheet that converts one format to another by translating tags and changing how the file structures its data. You could also create an additional stylesheet that converts an XML recipe into an HTML file that you can view in a web browser or send to your printer. The first release of the XSLT specification can transform XML files into other XML files, HTML pages, and plain text documents. The second release, XSLT 2.0 (2007), added new features that allow an XSLT processor to modify data strings within an XML file using regular expressions, as well as new functions and operators for mathematically manipulating date and time data. It also introduced the ability to split XML input into multiple output files that contain separate aspects of the source. The most recent release, XSLT 3.0 (2017), improved support for extremely large inputs — allowing an XML processor to start translating and outputting data before the input file is fully loaded. It also added support for using JSON files as input. Updated December 15, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/xslt Copy
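To make this concrete, here is a sketch of an XSLT stylesheet that turns a recipe XML file into a simple HTML page; it assumes the hypothetical recipe tags used in the XML definition above:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/recipe">
        <html>
          <body>
            <h1><xsl:value-of select="name"/></h1>
            <p><xsl:value-of select="description"/></p>
            <ul>
              <xsl:for-each select="ingredients/ingredient">
                <li><xsl:value-of select="quantity"/> of <xsl:value-of select="name"/></li>
              </xsl:for-each>
            </ul>
            <p><xsl:value-of select="instructions"/></p>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>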

Y2K

Y2K Stands for "Year 2000." Y2K, most commonly associated with the "Y2K bug," was a programming issue that sparked widespread concern as the year 1999 turned to 2000. The bug stemmed from a common practice in early computer programming: storing years as two digits (e.g., "80" for 1980) to save memory. This design choice meant some systems would misinterpret "00" as 1900 instead of 2000, potentially causing date-related errors. Fears surrounding the Y2K bug were far-reaching, as engineers predicted catastrophic failures in critical systems like power grids, financial institutions, transportation networks, and government infrastructure. However, a concerted global effort by programmers in the late 1990s successfully mitigated most risks. Software developers updated legacy systems using two main strategies: Expanding Date Fields: Changing the year to include four digits instead of two (e.g., "1980" instead of "80"). Date Windowing: In rare instances where it was not possible to alter the date data type, developers implemented a temporary fix that assigned a "pivot year," ensuring dates were correctly interpreted within a predefined range. For instance, a pivot year of 30 would treat "00–29" as 2000–2029 and "30–99" as 1930–1999. When January 1, 2000 arrived, the anticipated chaos failed to materialize. Thanks to extensive preparation, only minor glitches occurred, such as incorrect timestamps on receipts. The lingering issues were quickly resolved, allowing society to transition smoothly into the new millennium. Decades later, Y2K remains a case study in proactive risk management, demonstrating how preparation — and collaboration — can avert potential crises. It also serves as a reminder of the importance of forward-thinking design in technology. Updated November 21, 2024 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/y2k Copy
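The date windowing strategy is simple enough to show in a few lines of Python, using the pivot year of 30 from the example above:

    def expand_year(two_digit_year, pivot=30):
        """Interpret a two-digit year using a pivot: 00-29 become 2000s, 30-99 become 1900s."""
        if two_digit_year < pivot:
            return 2000 + two_digit_year
        return 1900 + two_digit_year

    print(expand_year(0))    # 2000
    print(expand_year(29))   # 2029
    print(expand_year(30))   # 1930
    print(expand_year(99))   # 1999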

Yahoo!

Yahoo! Yahoo! is one of the Internet's leading search engines. It is also the largest Web portal, providing links to thousands of other websites. These links include sites from the Yahoo! Directory as well as news stories that are updated several times a day. Besides being a portal and search engine, Yahoo! offers several other services as well. Some of these services include: Yahoo! Finance - stock quotes and financial information Yahoo! Shopping - online retail and price comparison services Yahoo! Games - online games playable over the Internet Yahoo! Groups - organized discussions among Internet users Yahoo! Travel - travel information and booking services Yahoo! Maps - maps and directions Yahoo! Messenger - instant messaging Yahoo! Mail - free Web-based e-mail To learn more about Yahoo! services, visit the Yahoo! home page. Updated in 2006 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/yahoo Copy

Yobibyte

Yobibyte A yobibyte (YiB) is a unit of data storage equal to 2^80 bytes, or 1,208,925,819,614,629,174,706,176 bytes. It measures data at the same scale as a yottabyte, but has a binary prefix instead of a decimal prefix. A yobibyte is slightly larger than a yottabyte (YB), which is 10^24 bytes (1,000,000,000,000,000,000,000,000 bytes); a yobibyte is roughly 1.2089 yottabytes. A yobibyte is 1,024 zebibytes, and is currently the largest defined binary storage unit. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the common definition of a measurement could mean different numbers. When computer engineers first began using the term kilobyte to refer to a binary measurement of 2^10 bytes (1,024 bytes), the difference between binary and decimal measurements was roughly 2%. As file sizes and storage capacities expanded, so did the difference between the two types of measurements — a yobibyte is more than 20% larger than a yottabyte. Using different terms to refer to binary and decimal measurements helps to address the confusion. NOTE: For a list of other units of measurement, view this Help Center article. Updated November 22, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/yobibyte Copy
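The growing gap between binary and decimal prefixes is easy to check with a quick Python calculation:

    yobibyte  = 2 ** 80    # 1,208,925,819,614,629,174,706,176 bytes
    yottabyte = 10 ** 24   # 1,000,000,000,000,000,000,000,000 bytes
    kibibyte  = 2 ** 10
    kilobyte  = 10 ** 3

    print(kibibyte / kilobyte)   # 1.024, about a 2% difference
    print(yobibyte / yottabyte)  # about 1.2089, more than a 20% difference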

Yosemite

Yosemite Yosemite is the name of Apple's eleventh version of OS X, released on October 16, 2014. It follows Mavericks (OS X 10.9) and is also called OS X 10.10. While it is normal for each new version of OS X to include slight modifications to the user interface, Yosemite presented the most drastic change in several years. The interface includes many translucent elements, in which objects are semi-transparent. It includes a more "flat" design than earlier versions of OS X with new icons and window styles that do not have a glossy finish. While the design is not as flat as iOS 8, Yosemite's updated style makes OS X and iOS more similar in appearance. Besides altering the OS X interface to be more like iOS, Apple designed Yosemite to be more integrated with iOS as well. For example, the updated AirDrop feature allows you to share files directly with nearby iOS devices. Handoff allows you to work on something on your Mac and then resume work on your iOS device, and vice versa. The improved Messages app allows you to send and receive both iMessages and SMS messages using your Mac instead of your smartphone. You can even send and receive calls with your Mac by relaying them through your iPhone. Yosemite also includes significant updates to OS X's traditional bundled apps, such as Safari and Mail. For example, Safari includes a built-in Share menu for sharing links on social media and provides its own search suggestions when typing keywords in the address bar. Mail allows you to mark up attachments before sending them, and includes Mail Drop, a service that allows you to send email attachments up to 5 GB in size. In Yosemite, iCloud Drive is built into the Finder, which allows you to manage your iCloud documents the same way as your local files. Like Mavericks, OS X Yosemite is available as a free upgrade from the Mac App Store. It supports most Mac models released in mid-2007 or later. Updated December 31, 2014 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/yosemite Copy

Yottabyte

Yottabyte A yottabyte is 10^24 or 1,000,000,000,000,000,000,000,000 bytes. One yottabyte (abbreviated "YB") is equal to 1,000 zettabytes and is the largest SI unit of measurement used for measuring data. It is slightly smaller than its IEC-standardized counterpart, the yobibyte, which contains 1,208,925,819,614,629,174,706,176 (2^80) bytes. A single yottabyte contains one septillion, or one trillion times one trillion bytes, which is the same as one trillion terabytes. It is also a number larger than any human can comprehend. There is no need for a unit of measurement larger than a yottabyte because there is simply no practical use for such a large measurement. Even all the data in the world consists of just a few zettabytes. NOTE: View a list of all the units of measurement used for measuring data storage. Updated December 15, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/yottabyte Copy

YouTube

YouTube YouTube is a video sharing service that allows users to watch videos posted by other users and upload videos of their own. The service was started as an independent website in 2005 and was acquired by Google in 2006. Videos that have been uploaded to YouTube may appear on the YouTube website and can also be posted on other websites, though the files are hosted on the YouTube server. The slogan of the YouTube website is "Broadcast Yourself." This implies the YouTube service is designed primarily for ordinary people who want to publish videos they have created. While several companies and organizations also use YouTube to promote their business, the vast majority of YouTube videos are created and uploaded by amateurs. YouTube videos are posted by people from all over the world, from all types of backgrounds. Therefore, there is a wide range of videos available on YouTube. Some examples include amateur films, homemade music videos, sports bloopers, and other funny events caught on video. People also use YouTube to post instructional videos, such as step-by-step computer help, do-it-yourself guides, and other how-to videos. Since Google offers revenue sharing for advertisement clicks generated on video pages, some users have been able to turn YouTube into a profitable enterprise. While YouTube can serve as a business platform, most people simply visit YouTube for fun. Since so many people carry digital cameras or cell phones with video recording capability, more events are now captured on video than ever before. While this has created an abundant collection of entertaining videos, it also means that people should be aware that whatever they do in public might be caught on video. And if something is recorded on video, it just might end up on YouTube for the whole world to see. Updated October 7, 2009 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/youtube Copy

Zebibyte

Zebibyte A zebibyte (ZiB) is a unit of data storage equal to 2^70 bytes, or 1,180,591,620,717,411,303,424 bytes. Like kibibyte (KiB) and mebibyte (MiB), zebibyte has a binary prefix to remove any ambiguity when compared to the multiple possible definitions of a zettabyte. A zebibyte is slightly larger than a zettabyte (ZB), which is 10^21 bytes (one sextillion, or 1,000,000,000,000,000,000,000 bytes); a zebibyte is roughly 1.18 zettabytes. A zebibyte is 1,024 exbibytes, and 1,024 zebibytes make up a yobibyte. Due to historical naming conventions in the computer industry, which used decimal (base 10) prefixes for binary (base 2) measurements, the common definition of a particular measurement could mean different numbers. When computer engineers first began using the term kilobyte to refer to a binary measurement of 2^10 bytes (1,024 bytes), the difference between binary and decimal measurements was roughly 2%. As file sizes and storage capacities expanded, so did the difference between the two types of measurements — a zebibyte is roughly 18% larger than a zettabyte. Using different terms to refer to binary and decimal measurements helps to address the confusion. NOTE: For a list of other units of measurement, view this Help Center article. Updated February 6, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/zebibyte Copy

Zero Day Exploit

Zero Day Exploit A zero-day exploit is a type of computer attack that exploits a software vulnerability before that vulnerability is known to the public or the software's developer. Zero-day exploits are very valuable to hackers and cybercriminals because they are unknown — all systems running the affected software are vulnerable, so the attack is likely to succeed. The term "zero-day" refers to the idea that the developer has had zero days to fix the vulnerability before attackers begin exploiting it. After hackers discover a new vulnerability, they create a virus or piece of malware to exploit it. The exact methods depend on the nature of the security hole — some exploits can attack a vulnerable web browser when it loads a specific webpage, while others require a social engineering attack to trick someone into installing a trojan horse on a vulnerable computer. Once a hacker has access to a system, they can install additional malware, steal valuable data, or disrupt a service that the system provides. Since the exploit is unknown to its victims, a hacker who can cover their tracks can often maintain this unauthorized access for some time completely undetected. Hackers who discover a new vulnerability may often choose to exploit it themselves or sell the details to other hackers and cybercrime groups on the black market. On the other hand, white-hat hackers who find a new vulnerability often report it to the software vendor for a bug bounty. Software developers can only issue a hotfix to patch a security hole after the zero-day exploit is publicly known. Antivirus software definitions will also only be updated to catch exploits after they've been revealed. It's wise to maintain frequent data backups in case a zero-day exploit is used to attack your computer and modify or delete your data, and to make sure that you regularly install security patches as they are issued to close security holes as soon as possible. Updated October 12, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/zerodayexploit Copy

Zettabyte

Zettabyte A zettabyte is 10^21 or 1,000,000,000,000,000,000,000 bytes. One zettabyte (abbreviated "ZB") is equal to 1,000 exabytes and precedes the yottabyte unit of measurement. Zettabytes are slightly smaller than zebibytes, which contain 1,180,591,620,717,411,303,424 (2^70) bytes. A single zettabyte contains one sextillion bytes, or one billion terabytes. That means it would take one billion one-terabyte hard drives to store one zettabyte of data. Because the zettabyte unit of measurement is so large, it is only used to measure large aggregate amounts of data. Even all the data in the world is estimated to be only a few zettabytes. NOTE: View a list of all the units of measurement used for measuring data storage. Updated December 15, 2012 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/zettabyte Copy

ZIF

ZIF Stands for "Zero Insertion Force." A ZIF socket is a type of socket found on a computer motherboard that allows the easy insertion and removal of a processor. A small lever next to the socket opens and closes the contacts that grip the processor's pins, so the chip can be placed in the socket without pressure, minimizing the risk of bending or damaging its pins. Installing a chip into a ZIF socket generally requires no additional tools. All modern CPU sockets using a Pin Grid Array (PGA) package are ZIF sockets; Land Grid Array (LGA) sockets don't insert pins into the socket at all. Before inserting a chip into a ZIF socket, lift the lever next to the socket upwards. The contacts within the socket open up, allowing the pins on the underside of the chip to fit easily into the socket. After pushing the lever back down to close the socket, the contacts close around the pins. The lever is then locked into place to keep the CPU chip secure. Before the introduction of ZIF sockets, inserting a processor into a socket required a significant amount of force — the installer had to push each pin on the underside of the chip into place. Chips with more pins required more force. Using too much force, or applying it unevenly, could bend or break pins and render the CPU unusable. Updated November 2, 2022 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/zif Copy

Zip

Zip Zip is a common type of file compression. "Zipping" one or more files creates a compressed archive that takes up less disk space than the uncompressed version. It is useful for backing up files and reducing the size of data transferred over the Internet. An archive compressed with standard Zip compression has a .ZIP file extension — for example, Archive.zip. To open the file or files in a Zip archive, you must first "unzip" or decompress the archive. Both Windows and macOS include a built-in file decompression utility that can unzip Zip files. Alternatively, you can use a third-party file archiving application, such as WinZip or 7-Zip, to decompress a Zip file. Zip Compression Ratio How much a file can be compressed depends on the original data. For example, a plain text file can be compressed much more than a JPEG image file, since JPEG data is already compressed. Zipping a text file might generate an archive that is only 25% of the original file size, while a zipped JPEG file may still be 95% of its original size. NOTE: "Zip" also refers to a removable storage device made by Iomega during the 1990s and early 2000s. The original Zip drive supported 100 megabyte removable Zip disks — significantly more than the 1.44 MB floppy disk alternative. Later models supported 250 and 750 MB disks. The popularity of Zip disks faded as hard drive costs declined and the storage capacity of portable flash drives increased. File extensions: .ZIP, .ZIPX Updated September 30, 2019 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/zip Copy
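For example, creating and reading a Zip archive takes only a few lines of Python with the built-in zipfile module; the file names below are placeholders:

    import os
    import zipfile

    # Create a sample text file to compress
    with open("notes.txt", "w") as f:
        f.write("Plain text compresses very well. " * 500)

    # Zip it using DEFLATE compression
    with zipfile.ZipFile("Archive.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
        archive.write("notes.txt")

    # Compare the original and compressed sizes
    print(os.path.getsize("notes.txt"), "bytes uncompressed")
    print(os.path.getsize("Archive.zip"), "bytes zipped")

    # Unzip the archive into a folder
    with zipfile.ZipFile("Archive.zip") as archive:
        archive.extractall("unzipped")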

Zip Bomb

Zip Bomb A zip bomb is a compressed file that consumes a massive amount of storage space when decompressed. When a zip bomb is opened, it can quickly take up all the free space on a storage device. Most Zip files have a compression ratio between 2:1 and 10:1. For example, a 3 megabyte compressed .ZIP file might expand to 15 megabytes. A zip bomb, on the other hand, may have a compression ratio of over 1,000,000:1. A small 40 kilobyte zip bomb may expand to over 5 gigabytes. A 10 megabyte zip bomb may expand to over 280 terabytes. Zip bombs achieve astronomical compression ratios using one of two methods: recursive compression or overlapping files. Recursive compression, which is the most common way to create a zip bomb, stores layers of compressed files in a single archive. When the primary archive is decompressed, it recursively expands the nested archives. As multiple layers of compressed files are opened, the output grows exponentially. A zip bomb may also be created using overlapping files. Instead of storing layers of compressed archives, the archive contains multiple directory headers that point to a single file. By "overlapping" files within the archive, it can exceed the maximum compression ratio of about 1,032:1 imposed by the DEFLATE algorithm. Zip bombs are considered malware because they can be used for malicious purposes. For instance, the rapid decompression may use 100% of system resources, rendering the computer unresponsive. Fortunately, most antivirus programs can detect zip bombs and warn against opening them. If you unknowingly open a zip bomb on your PC, most Zip utilities allow you to stop the decompression in mid-process. Updated February 2, 2021 by Per C. APA MLA Chicago HTML Link https://techterms.com/definition/zip_bomb Copy

Zone File

Zone File A zone file is a plain text file stored on a name server that contains information about the domains within that name server's zone. It includes contact information about that name server's administrator and lists each domain and subdomain alongside its corresponding IP address. Zone files are a critical part of the Domain Name System infrastructure, allowing computers to locate servers on the Internet at all levels of the DNS system hierarchy. The DNS system delegates portions of the larger namespace into zones, which may be subdivided into smaller zones. The largest zones cover the entirety of each top-level domain (like .com and .net). Within those are smaller zones managed by other nameservers, which may consist of a single domain (example.com) and its subdomains (mail.example.com) or cover a larger group of multiple domains. Each name server maintains a zone file for the domains within its zone and the corresponding IP address for each domain and subdomain. When another computer on the Internet makes a request to that name server, it checks the zone file and returns the requested domain's IP address. An example zone file, showing directives, a Start of Authority record, and DNS records, appears after this definition. Zone files store several important pieces of information about a domain. Each zone file begins with one or more directives — lines of text that start with a '$' character and set instructions for the name server to follow, like specifying a TTL expiration timer or an origin domain for relative domain names to follow. Every zone file needs to include a Start of Authority (SOA) record that lists the email address of the name server's administrator, an address for an authoritative master DNS server, and settings for how and when to refresh the zone file's contents. The bulk of a zone file consists of DNS records for domain names and subdomains, matching a specific domain name with the corresponding server's IP address. Updated December 11, 2023 by Brian P. APA MLA Chicago HTML Link https://techterms.com/definition/zone_file Copy
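Here is a sketch of what a small zone file might look like; all names and addresses are placeholders built on the reserved example.com domain:

    $TTL 86400                    ; default time-to-live for records, in seconds
    $ORIGIN example.com.          ; relative names below are appended to this domain

    @       IN  SOA  ns1.example.com. admin.example.com. (
                         2024010101 ; serial number, incremented on each change
                         3600       ; refresh interval
                         900        ; retry interval
                         1209600    ; expire time
                         86400 )    ; negative-caching TTL

    @       IN  NS   ns1.example.com.   ; authoritative name server
    @       IN  A    93.184.216.34      ; IP address for example.com
    www     IN  A    93.184.216.34      ; subdomain www.example.com
    mail    IN  A    203.0.113.25       ; subdomain mail.example.com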