Computer

Etymologically, the word "computer" comes from a Latin word meaning "to calculate"; a computer is therefore a calculating machine. A computer is an electric or electronic machine that can carry out mental operations in a very short time; it can store information (data) and make it available to us again. In general, a computer has two main parts:

1. Software

2. Hardware

1. Software: the programs; the part of the computer that can only be seen but cannot be touched.

 

2. Hardware: the part of the computer that can be touched as well as seen.

 

 

 

History of the Computer

 

Before the idea of the computer arose, the Chinese in ancient times used the abacus for their calculations. Later, the Americans set out to conduct a census of their continent, but counting a single city, New York, took a very long time. Around this time an English scholar named Charles Babbage (1791-1871) designed a machine at Cambridge University and named it the Difference Engine. The Second World War then drove serious development of these tools and technologies.

In 1946 a group from the IBM company (International Business Machines) built a larger electric calculator, known as the Harvard Mark I. In the same year another electronic calculator, whose technology was based on vacuum tubes, was also built; this links the history of computer development to the later generations of computers.

To date, four generations of computers have been developed; the characteristics of each generation are defined by its underlying technology (vacuum tubes, transistors, ICs, and LSI).

 

The technology of the first generation of computers was based on vacuum tubes; these computers were used during 1946-1953. They were extremely large and heavy and consumed a great deal of power; moreover, their arithmetic speed and memory were not satisfactory.

A new stage in the development of computers was the production of machines whose technology was based on semiconductors and magnetic devices. This stage is associated with the second generation of computers.

In the late 1950s and early 1960s, transistors were produced; compared with electronic tubes they had less weight, volume, and power consumption, and their operating speed and lifespan were satisfactory.

The third generation of computers came into use in the late 1960s; its technology was based on the IC (Integrated Circuit). An IC is far smaller and lighter than transistors and has a long lifespan. With the appearance of the third generation it became possible to connect computers to one another.

In 1975 a major change occurred in the computer industry: the microprocessor was built, based on LSI (Large-Scale Integration) technology. This constitutes the fourth generation of computers.

Fourth-generation computers can execute a hundred million operations per second.

 

Digital Computers

 

These computers, which are in widespread use today, work by means of mathematical logic; that is, they operate on binary numbers.
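Since binary arithmetic may be unfamiliar, here is a small illustrative sketch (in Python, not part of the original text; the helper name to_binary is our own) of how a digital computer represents and adds numbers in binary:

```python
# Illustrative sketch: how a digital computer sees ordinary numbers.

def to_binary(n: int, width: int = 8) -> str:
    """Return the binary representation of n, padded to `width` bits."""
    return format(n, f"0{width}b")

a, b = 13, 25
print(to_binary(a))      # 00001101
print(to_binary(b))      # 00011001
print(to_binary(a + b))  # 00100110  (13 + 25 = 38)
```

Internally the hardware performs this addition bit by bit, propagating carries leftward, just as decimal column addition carries tens.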

Digital computers are divided into four classes according to their capacity:

1. Micro Computers

2. Mini Computers

3. Mainframe Computers

4. Super Computers

1. Micro Computers

These are among the best-known digital computers in the world. They are small and easily carried, are sometimes called pocket computers, and are also known as PCs (Personal Computers).

 

Microcomputers also include the PDA (Personal Digital Assistant). They come in the following forms:

 

1. Laptop computers

2. Palmtop computers

3. Desktop computers

4. Handheld computers

 

 

Palmtop computers

These computers are placed on the palm of the hand for use; they are used especially widely in intelligence work.

They have no permanent storage, i.e., no hard disk; instead, all programs are stored in ROM, and whenever we want to open a program the computer uses its volatile memory, the RAM.

Palmtop computers formerly ran Windows CE.

 

Handheld computers

This is a type of microcomputer that, like a palmtop, is held in the hand while being used.

 

Laptop computers

 

These are also a type of microcomputer, designed to be placed on the lap. They have a special battery and charger, so they can easily be carried from place to place; for this reason they are known as portable computers.

 

Desktop computers: These are a type of microcomputer that must be placed in a fixed location and connected to mains power. They are very difficult to move from place to place, and are therefore not portable.

 

2. Mini Computers

 

These are digital computers that are larger than microcomputers, with more and stronger memory; 1 to 16 people can use one at the same time.

 

3. Mainframe Computers

 

These computers are very fast and carry out many important tasks. They are widely used in hospitals, research centers, laboratories, weather forecasting and meteorology, spacecraft, communication systems, censuses, and other public-service institutions.

 

4. Super Computers

These computers are very expensive and perform complex tasks; they are used effectively in satellite systems, banking systems, scientific development, nuclear research, economics, and other fields.

By: MOA

The most important shortcuts in Windows

Windows 7 introduced a large number of keyboard shortcuts, which can be very useful for people who work with computers a lot and want to get things done faster. On the other hand, the sheer number of these shortcuts has itself become a problem, and finding the useful ones has become somewhat difficult. For that reason, we present a list of the most important ones for the dear readers of Yadbegir.com.

 

Window management shortcuts

1. [Win+M] minimizes all open windows

2. [Win+Shift+M] restores all minimized windows

3. [Win+D] shows the desktop

4. [Win+Up] maximizes the window

5. [Win+Down] restores or minimizes the window

6. [Win+Left] snaps the window to the left half of the screen

7. [Win+Right] snaps the window to the right half of the screen

8. [Win+Shift+Up] stretches the window to the full height of the screen

9. [Win+Shift+Down] restores the window to its previous height

10. [Win+Shift+Left] moves the window to the monitor on the left

11. [Win+Shift+Right] moves the window to the monitor on the right

12. [Win+Spacebar] shows the desktop for a moment

13. [Win+Home] minimizes all windows except the active one, and restores them again

14. [Alt+F4] closes the active window

15. [Alt+Tab] switches between open windows and programs

16. [Alt+Esc] cycles through all open windows

17. [Win+Tab] 3D window switching (Flip 3D)

Shortcuts for the Taskbar

1. [Win+1, 2, 3, ..., 0] launches the program at that numbered position on the Taskbar

2. [Ctrl+click on a Taskbar item] cycles through the windows of that program so you can bring up the one you want

3. [Shift+click on a Taskbar item] opens a new instance of the program

4. [Ctrl+Shift+click on a Taskbar item] opens a new instance with administrator rights

5. [Shift+right-click on an icon] opens the familiar Restore/Minimize/... window menu

6. [Win+T] cycles through the programs on the Taskbar

7. [Win+Shift+T] does the same in reverse order

8. [Win+R] opens the Run dialog

 

General

1. [Win+P] shows the presentation display options (e.g. for a projector)

2. [Win+G] brings the desktop gadgets to the front

3. [Win+L] locks the computer

4. [Win+X] opens the Windows Mobility Center

5. [Win+Plus] (or [Win+=]) zooms in with the Magnifier

6. [Win+Minus] zooms out

 

Shortcuts for Windows Explorer

1. [Alt+P] shows or hides the preview pane

2. [Alt+Up] goes up one folder level

3. [Alt+Left/Right] navigates back and forward

 

 

                                                                   www.OsmanArrib.blogfa.com

 

by: MOA

What is an operating system?

An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. Application programs usually require an operating system to function.

Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
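As a concrete illustration of the paragraph above, the following Python sketch (our own example, not from the source; the file name demo.txt is just a placeholder) asks the operating system for services through system calls. Python's os module is a thin wrapper over the underlying OS calls such as open, write, and close:

```python
import os

# Each os.* call below traps into the kernel, which performs the
# privileged work (touching the disk) on the program's behalf.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # open
os.write(fd, b"hello, OS\n")                                     # write
os.close(fd)                                                     # close

print(os.path.exists("demo.txt"))  # True: the kernel created the file
```

The program never drives the disk hardware itself; it only requests the service, and the operating system mediates between the request and the hardware.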

Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows,[3] Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.

Types of operating systems

Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design and often aspects of both. An event-driven system switches between tasks based on their priorities or external events while time-sharing operating systems switch tasks based on clock interrupts.
Multi-user
A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.
Multi-tasking vs. single-tasking
A multi-tasking operating system allows more than one program to be running at a time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive and co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking. 32-bit versions of both Windows NT and Win9x used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking.
Distributed
Further information: Distributed system
A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.
Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.
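To make the cooperative variant of multitasking described above concrete, here is a toy Python scheduler (a simplified illustration only, not any real operating system's implementation; the names task and run are our own): each task runs until it voluntarily yields control, and the scheduler switches to the next one.

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # voluntarily hand control back

def run(tasks):
    """Round-robin over tasks until all finish; returns the execution trace."""
    ready, trace = deque(tasks), []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # run the task up to its next yield
            ready.append(t)        # re-queue it behind the others
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

A pre-emptive system differs in that the kernel interrupts a running task on a timer tick instead of waiting for it to yield; a task that never yields here would starve all the others, which is exactly the weakness of cooperative designs.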
History
Main article: History of operating systems
See also: Resident monitor

Early computers were built to perform a series of single tasks, like a calculator. Operating systems did not exist in their modern and more complex forms until the early 1960s.[4] Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them, similar in concept to those used on larger computers.

In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).

OS/360 was used on most IBM mainframe computers beginning in 1966, including the computers that helped NASA put a man on the moon.

In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period of time and would arrive at a scheduled time with program and data on punched paper cards and/or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.[4]

Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and generating computer code from human-readable symbolic code. This was the genesis of the modern-day computer system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job-priority.
Mainframes
Main article: Mainframe computer
See also: History of IBM mainframe operating systems

Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupt, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.

During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and applications written for OS/360 can still be run on modern machines.

OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When the process is terminated for any reason, all of these resources are re-claimed by the operating system.

The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).

Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. Plato was remarkably innovative for its time, featuring real-time chat, and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP, (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages with no machine language or assembler, and indeed the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS400, IBM made an approach to Burroughs to licence MCP to run on the AS400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BC system.

General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed to General Comprehensive Operating System (GCOS).

Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.

In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility were proven to be more significant.

The enormous investment in software for these systems made since 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. The notable supported mainframe operating systems include:
Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.
Microcomputers

PC-DOS was an early personal computer OS that featured a command line interface.

Mac OS by Apple Computer became the first widespread OS to feature a graphical user interface. Many of its features such as windows and icons would later become commonplace in GUIs.

The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the '80s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS.

The introduction of the Intel 80386 CPU chip with 32-bit architecture and paging capabilities provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.

The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.
Examples of operating systems
UNIX and UNIX-like operating systems

Evolution of Unix systems
Main article: Unix

Unix was originally written in assembly language.[5] Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History).

The UNIX-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.

Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.

Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris Operating System can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's Mac OS X, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.

Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
BSD and its descendants

The first server for the World Wide Web ran on NeXTSTEP, based on BSD.
Main article: Berkeley Software Distribution

A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.

BSD has its roots in Unix. In 1974, University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.

Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.

Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD.
OS X
Main article: OS X

The standard user interface of Mac OS X

Mac OS X is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997. The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0 "Cheetah") following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being OS X 10.8 "Mountain Lion", which was first made available on February 16, 2012 for developers, and was then released to the public on July 25, 2012. Releases of Mac OS X are named after big cats.

The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. In Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version.[6]
Linux and GNU
Main articles: GNU, Linux, and Linux kernel

Ubuntu, desktop Linux distribution

Android, a popular mobile operating system using the Linux kernel

Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from supercomputers to wristwatches. The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest that Linux is used on 1.82% of all personal computers,[7][8] it has been widely adopted for use in servers[9] and embedded systems[10] (such as cell phones). Linux has superseded Unix in most places, and is used on the 10 most powerful supercomputers in the world.[11] The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android.

The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that was similar to Unix but with completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support and volunteers who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.
Google Chrome OS
Main article: Google Chrome OS

Chrome OS is an operating system based on the Linux kernel and designed by Google. Since Chrome OS targets computer users who spend most of their time on the Internet, it is mainly a web browser with no ability to run applications. It relies on Internet applications (or Web apps) used in the web browser to accomplish tasks such as word processing and media viewing, as well as online storage for storing most files.
Microsoft Windows
Main article: Microsoft Windows

Bootable Windows To Go USB flash drive

Microsoft Windows 7 Desktop

Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[8][12][13][14] The newest version is Windows 8 for workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP as the most used OS.[15][16][17]

Microsoft Windows originated in 1985 as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, which used MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[18][19] and 16-bit Windows 3.x[20] drivers. Windows Me, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 microprocessors, although Windows 8 will support the ARM architecture. In the past, Windows NT supported non-Intel architectures.

Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers, as Windows competes against Linux and BSD for server market share.[21][22]
Other

There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research.

Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.
Components

The components of an operating system all exist in order to make the different parts of a computer work together. All user software must go through the operating system in order to use any of the hardware, whether it is as simple as a mouse or keyboard or as complex as a network interface.
Kernel

A kernel connects the application software to the hardware of a computer.
Main article: Kernel (computing)

With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.
Program execution
Main article: Process (computing)

The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program which then interacts with the user and with hardware devices.
Interrupts
Main article: Interrupt

Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative — having the operating system "watch" the various sources of input for events (polling) that require action — can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place.

When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.
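POSIX signals are a user-space analogue of this mechanism: the running code is suspended, a previously registered handler runs, and control then returns to the interrupted code. A minimal Python sketch (POSIX-only; the handler name and the choice of SIGUSR1 are illustrative):

```python
import os
import signal

events = []

def handler(signum, frame):
    # Runs asynchronously, like an interrupt service routine: the main
    # flow of the program is suspended, this code runs, control returns.
    events.append(signum)

# Associate the "interrupt" (signal) with the code to run when it occurs.
signal.signal(signal.SIGUSR1, handler)

# Raise the signal against our own process; the handler runs, and the
# main program then resumes exactly where it left off.
os.kill(os.getpid(), signal.SIGUSR1)
print(events == [signal.SIGUSR1])  # True
```

The handler plays the role of the "bookmark" response: it runs once per event, and the interrupted code never needs to poll for the event itself.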

When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.

A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel then processes the request. If a program requires additional resources (or wishes to shed resources) such as memory, it triggers an interrupt to get the kernel's attention.
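This boundary is visible from Python, where functions in the `os` module are thin wrappers over such kernel requests. A small illustrative sketch:

```python
import os

# os.write() wraps the write(2) system call: the process executes a trap
# instruction, the CPU switches to kernel mode, the kernel performs the
# I/O on the process's behalf, and control returns to the program along
# with the result (the number of bytes written).
n = os.write(1, b"hello\n")  # file descriptor 1 is standard output
print(n == 6)  # True
```

Every such call crosses the user/kernel boundary once; the program itself never touches the output device directly.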
Modes
Main articles: Protected mode and Supervisor mode

Privilege rings for the x86 available in protected mode. Operating systems determine which processes run in each mode.

Modern CPUs support multiple modes of operation. CPUs with this capability use at least two modes: protected mode and supervisor mode. The supervisor mode is used by the operating system's kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is written and erased, and communication with devices like graphics cards. Protected mode, in contrast, is used for almost everything else. Applications operate within protected mode, and can only use hardware by communicating with the kernel, which controls everything in supervisor mode. CPUs might have other modes similar to protected mode as well, such as the virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.

When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer, such as the BIOS or EFI, the bootloader, and the operating system, have unlimited access to hardware, and this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.

In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.

The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
Memory management
Main article: Memory management

Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time-share the computer, each program must have independent access to memory.

Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.

Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.

In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses will trigger an interrupt which will cause the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation or Seg-V for short, and since it is both difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel will generally resort to terminating the offending program, and will report the error.
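This behaviour can be observed from user space. The sketch below (POSIX-only; using `ctypes` to read an invalid address is deliberately pathological) runs a child process that reads address 0; the kernel terminates it with SIGSEGV, which the parent sees as a negative return code:

```python
import signal
import subprocess
import sys

# The child tries to read memory at address 0, which is outside any range
# the kernel has granted it; the resulting fault terminates the process.
crash = "import ctypes; ctypes.string_at(0)"
proc = subprocess.run([sys.executable, "-c", crash], capture_output=True)

# On POSIX, a negative return code means "killed by that signal number".
print(proc.returncode == -signal.SIGSEGV)
```

The parent process is unaffected: the kernel contains the damage to the offending process, exactly as described above.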

Windows 3.1 through Windows Me had some level of memory protection, but programs could easily circumvent it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
Virtual memory
Main article: Virtual memory
Further information: Page fault

Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it were one continuous chunk of memory, called virtual memory.

The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.

If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.

When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
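Demand paging can be observed directly: an anonymous `mmap` reserves address space without immediately backing it with physical pages, and the first touch of each page raises a minor page fault that the kernel services transparently. A POSIX-only sketch (the 64-page size is arbitrary):

```python
import mmap
import resource

PAGE = mmap.PAGESIZE
N_PAGES = 64

# Address space is reserved here, but physical pages are not yet assigned.
buf = mmap.mmap(-1, N_PAGES * PAGE)

before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
for offset in range(0, N_PAGES * PAGE, PAGE):
    buf[offset] = 1  # first write to each page triggers a (minor) page fault
after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt

# The kernel handled each fault by allocating a zeroed page on demand;
# the program itself never noticed the interruptions.
print(after > before)
buf.close()
```

From the program's point of view the whole buffer was available all along; the kernel decided page by page when to actually allocate it.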

In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.

"Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[23]
Multitasking
Main articles: Computer multitasking and Process management (computing)
Further information: Context switch, Preemptive multitasking, and Cooperative multitasking

Multitasking refers to the running of multiple independent computer programs on the same computer; giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute.

An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.

An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
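Cooperative scheduling is easy to model with Python generators, where `yield` plays the role of the voluntary return of control to the kernel. A toy sketch (the task names and two-task run queue are illustrative):

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # yield == voluntarily return control

def round_robin(tasks):
    """A toy 'kernel' that passes control to each ready task in turn."""
    ready = deque(tasks)
    trace = []
    while ready:
        current = ready.popleft()   # "context switch" into this task
        try:
            trace.append(next(current))
            ready.append(current)   # back of the run queue
        except StopIteration:
            pass                    # task finished; drop it
    return trace

trace = round_robin([task("A", 2), task("B", 2)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

A task that never yields would loop forever inside `next(current)` and starve every other task — precisely the failure mode of cooperative multitasking described above.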

Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well.

The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
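The timer-driven preemption described above can be sketched in user space with POSIX interval timers: `setitimer` arranges a SIGALRM after one "time slice", and the handler forcibly takes control back from a loop that never yields (the 50 ms slice and exception name are arbitrary):

```python
import signal

class Preempt(Exception):
    pass

def on_timer(signum, frame):
    raise Preempt  # forcibly reclaim control, like a scheduler tick

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.05)  # one-shot 50 ms "time slice"

iterations = 0
try:
    while True:           # a program that never voluntarily yields
        iterations += 1
except Preempt:
    pass
signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending timer

print(iterations > 0)  # True: the loop ran until the slice expired
```

A real kernel does the same thing with a hardware timer interrupt and a mode switch rather than a signal, but the shape is identical: the running code is stopped whether it cooperates or not.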

On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having pre-emptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Disk access and file systems
Main article: Virtual file system

Filesystems allow users and programs to organize and sort files on a computer, often through the use of directories (or "folders")

Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.

Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.

While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
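The dispatch a VFS performs can be sketched as longest-prefix matching from a mount table to a per-file-system driver. Everything below (the class names, the dict-backed `MemFS`) is a hypothetical toy, not any real kernel's design:

```python
class MemFS:
    """A toy 'file system driver' that keeps files in a dict."""
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, data):
        self.files[path] = data

class VFS:
    """Dispatches each path to the driver mounted at its longest prefix."""
    def __init__(self):
        self.mounts = {}
    def mount(self, prefix, driver):
        self.mounts[prefix] = driver
    def _resolve(self, path):
        prefix = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[prefix], path[len(prefix):]
    def read(self, path):
        driver, rel = self._resolve(path)
        return driver.read(rel)
    def write(self, path, data):
        driver, rel = self._resolve(path)
        driver.write(rel, data)

vfs = VFS()
vfs.mount("/", MemFS())       # "root" file system
vfs.mount("/usb", MemFS())    # a second device mounted under /usb
vfs.write("/notes.txt", b"root fs")
vfs.write("/usb/notes.txt", b"usb stick")
print(vfs.read("/notes.txt"))      # b'root fs'
print(vfs.read("/usb/notes.txt"))  # b'usb stick'
```

The calling code uses one uniform interface; which driver actually serves a path is decided entirely by the mount table, as with a real VFS.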

A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices.

When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
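In Python this metadata is exposed through `os.stat`, which ultimately asks the file system driver for the file's attributes. A small sketch using a temporary file:

```python
import os
import stat
import tempfile

# Create a file and write five bytes to it.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

# Gather the metadata the file system maintains for every file.
info = os.stat(path)
print(info.st_size)                 # size in bytes: 5
print(stat.filemode(info.st_mode))  # access permissions, e.g. '-rw-------'
print(info.st_mtime > 0)            # modification time (Unix timestamp)

os.remove(path)  # delete the file again
```

The same call works regardless of which file system the temporary directory lives on, thanks to the uniform interface described above.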

Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third-party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software).

Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD,DVD...), a USB flash drive, or even contained within a file located on another file system.
Device drivers
Main article: Device driver

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent program that is also operating system specific, and it enables another program (typically the operating system, an applications software package, or a program running under the operating system kernel) to interact transparently with a hardware device. It usually also provides the interrupt handling required for asynchronous, time-dependent hardware interfacing.

The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Newer models also are released by manufacturers that provide more reliable or better performance and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view.
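The translation layer can be sketched as a fixed interface that operating system code is written against, with each driver mapping those calls onto its own device-specific protocol. All names below are hypothetical:

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """The OS-mandated interface every disk driver must implement."""
    @abstractmethod
    def read_block(self, n):
        ...

class VendorADisk(BlockDevice):
    def read_block(self, n):
        return f"A[{n}]"  # stands in for vendor A's device-specific protocol

class VendorBDisk(BlockDevice):
    def read_block(self, n):
        return f"B[{n}]"  # a different device, same mandated interface

def dump_first_blocks(dev: BlockDevice):
    # "Kernel" code written once against the interface works with any driver.
    return [dev.read_block(i) for i in range(2)]

print(dump_first_blocks(VendorADisk()))  # ['A[0]', 'A[1]']
print(dump_first_blocks(VendorBDisk()))  # ['B[0]', 'B[1]']
```

A new device only needs a new `BlockDevice` subclass; the rest of the system is untouched, which is exactly the abstraction goal described above.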

Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
Networking
Main article: Computer network

Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.

Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users. These services are usually provided through ports, or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. Such a program (on Unix-like systems often called a daemon) is a user program, and can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
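A minimal client/server exchange over the loopback interface shows these pieces together: the server binds a port (port 0 asks the kernel to pick a free one), listens, and answers one request; the client connects to that port. A Python sketch using only the standard library:

```python
import socket
import threading

# Server: bind a port on loopback and serve a single connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick an unused port
srv.listen(1)
port = srv.getsockname()[1]  # the port number the kernel assigned

def serve():
    conn, _ = srv.accept()   # handle one request arriving on "our" port
    conn.sendall(b"pong")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client: connect to the server's address and port, read the reply.
cli = socket.create_connection(("127.0.0.1", port))
reply = b""
while len(reply) < 4:
    chunk = cli.recv(4 - len(reply))
    if not chunk:
        break
    reply += chunk
cli.close()
t.join()
srv.close()
print(reply)  # b'pong'
```

Every socket operation here is a request to the kernel, which owns the actual network hardware on both ends of the connection.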

Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.
Security
Main article: Computer security

A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.

The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs.
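Password authentication is typically implemented by storing only a salted hash and comparing a freshly computed hash at login. A minimal sketch using Python's standard library (the iteration count and example passwords are arbitrary):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; only (salt, digest) is ever stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))  # True
print(verify("wrong", salt, digest))    # False
```

Because only the salted hash is stored, a leaked account database does not directly reveal the passwords used to establish identity.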

In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.

External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information.

Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, without being threatened by a security breach, because the firewall would deny all traffic trying to connect to the service on that port.

An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.

Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
User interface

A screenshot of the Bourne Again Shell command line. Each command is typed out after the 'prompt', and then its output appears below, working its way down the screen. The current command prompt is at the bottom.
Main article: Operating system user interface

Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported. The user interface presents the directory structure, requests services from the operating system that acquire data from input hardware devices such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and the like on output hardware devices such as a video monitor or printer. The two most common forms of user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.
Graphical user interfaces

A screenshot of the KDE Plasma Desktop graphical user interface. Programs take the form of images on the screen, and the files, folders (directories), and applications take the form of icons and symbols. A mouse is used to navigate the computer.

Most modern computer systems support graphical user interfaces (GUIs), and often include them. In some computer systems, such as the original implementation of Mac OS, the GUI is integrated into the kernel.

While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel. In the 1980s, UNIX, VMS and many others were built this way. Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user space; however the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.

Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma Desktop is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.

Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and 1990s efforts to standardize on COSE and CDE failed for various reasons; both were eventually eclipsed by the widespread adoption of GNOME and the K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).

Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[24]
Real-time operating systems
Main article: Real-time operating system

A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.

An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.

Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[citation needed] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b.

Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.
Operating system development as a hobby
See also: Hobbyist operating system development

Operating system development is one of the most complicated activities in which a computing hobbyist may engage. A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and which has few users and active developers.[25]

In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests.

Examples of a hobby operating system include ReactOS and Syllable.
Diversity of operating systems and portability

Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.) requiring the application to be adapted, changed, or otherwise maintained.

This cost in supporting operating systems diversity can be avoided by instead writing applications against software platforms like Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.

Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
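As a small illustration of the abstraction approach, a program written against a portable library pays the porting cost once, inside the library, rather than in every application. The sketch below uses Python's standard pathlib module; the function name is illustrative only.

```python
import pathlib

# Illustrative sketch (hypothetical function): building a log-file path
# against Python's portable pathlib abstraction instead of hard-coding
# an OS-specific separator ("C:\\logs" vs "/var/log"). The platform
# differences are handled once, inside the standard library.
def log_file(base, name):
    # The "/" operator joins path components without embedding "\\" or "/".
    return pathlib.PurePosixPath(base) / "log" / name

print(log_file("var", "app.log").as_posix())  # var/log/app.log
```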
See also
Comparison of operating systems
Handheld computers
Hypervisor
Interruptible operating system
List of important publications in operating systems
List of operating systems
Microcontroller

Network operating system
Object-oriented operating system
Operating System Projects
PCjacking
System image
Timeline of operating systems
Usage share of operating systems

References
^ Stallings (2005). Operating Systems, Internals and Design Principles. Pearson: Prentice Hall. p. 6.
^ Dhotre, I.A. (2009). Operating Systems. Technical Publications. p. 1.
^ "Operating System Market Share". Net Applications.
^ a b Hansen, Per Brinch, ed. (2001). Classic Operating Systems. Springer. pp. 4–7. ISBN 0-387-95113-X.
^ Ritchie, Dennis. "Unix Manual, first edition". Lucent Technologies. Retrieved 22 November 2012.
^ "OS X Mountain Lion - Move your Mac even further ahead". Apple. Retrieved 2012-08-07.
^ Usage share of operating systems
^ a b "Top 5 Operating Systems from January to April 2011". StatCounter. October 2009. Retrieved November 5, 2009.
^ "IDC report into Server market share". Idc.com. Retrieved 2012-08-07.
^ Linux still top embedded OS
^ Tom Jermoluk (2012-08-03). "TOP500 List – November 2010 (1–100) | TOP500 Supercomputing Sites". Top500.org. Retrieved 2012-08-07.
^ "Global Web Stats". Net Market Share, Net Applications. May 2011. Retrieved 2011-05-07.
^ "Global Web Stats". W3Counter, Awio Web Services. September 2009. Retrieved 2009-10-24.
^ "Operating System Market Share". Net Applications. October 2009. Retrieved November 5, 2009.
^ "w3schools.com OS Platform Statistics". Retrieved October 30, 2011.
^ "Stats Count Global Stats Top Five Operating Systems". Retrieved October 30, 2011.
^ "Global statistics at w3counter.com". Retrieved 23 January 2012.
^ "Troubleshooting MS-DOS Compatibility Mode on Hard Disks". Support.microsoft.com. Retrieved 2012-08-07.
^ "Using NDIS 2 PCMCIA Network Card Drivers in Windows 95". Support.microsoft.com. Retrieved 2012-08-07.
^ "INFO: Windows 95 Multimedia Wave Device Drivers Must be 16 bit". Support.microsoft.com. Retrieved 2012-08-07.
^ "Operating System Share by Groups for Sites in All Locations January 2009".
^ "Behind the IDC data: Windows still No. 1 in server operating systems". ZDNet. 2010-02-26.
^ Stallings, William (2008). Computer Organization & Architecture. New Delhi: Prentice-Hall of India Private Limited. p. 267. ISBN 978-81-203-2962-1.
^ Poisson, Ken. "Chronology of Personal Computer Software". Retrieved on 2008-05-07. Last checked on 2009-03-30.
^ "My OS is less hobby than yours". Osnews. December 21, 2009. Retrieved December 21, 2009.
Further reading
Auslander, Marc A.; Larkin, David C.; Scherr, Allan L. (1981). The evolution of the MVS Operating System. IBM J. Research & Development.
Deitel, Harvey M.; Deitel, Paul; Choffnes, David. Operating Systems. Pearson/Prentice Hall. ISBN 978-0-13-092641-8.
Bic, Lubomur F.; Shaw, Alan C. (2003). Operating Systems. Pearson: Prentice Hall.
Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts. John Wiley & Sons. ISBN 0-470-12872-0.
External links

Operating Systems at the Open Directory Project
Multics History and the history of operating systems
How Stuff Works - Operating Systems
Help finding your Operating System type and version
This page was last modified on 22 December 2012 at 15:41.

 Microsoft® SQL Server® 2008 Express


SQL Server 2008 Express is a free edition of SQL Server that is an ideal data platform for learning and building desktop and small server applications, and for redistribution by ISVs.
Quick details: version 10.00.1600.22, published 2/8/2009

Files in this download

The links in this section correspond to files available for this download. Download the files appropriate for you.
SQLEXPR_x64_ENU.exe (82.5 MB)
SQLEXPR_x86_ENU.exe (89.1 MB)
SQLEXPR32_x86_ENU.exe (61.1 MB)

Overview

Microsoft SQL Server 2008 Express is a powerful and reliable data management system that delivers a rich set of features, data protection, and performance for embedded application clients, light Web applications, and local data stores. Designed for easy deployment and rapid prototyping, SQL Server 2008 Express is available at no cost, and you are free to redistribute it with applications. It is designed to integrate seamlessly with your other server infrastructure investments. For more information about SQL Server Express, including other versions and downloadable components now available, see Microsoft SQL Server Express.

For information about the different editions of SQL Server 2008, see the Editions page.
System requirements

Supported operating systems: Windows Server 2003 Service Pack 2, Windows Server 2008, Windows Vista, Windows Vista Service Pack 1, Windows XP Service Pack 2, Windows XP Service Pack 3

32-Bit Systems: Computer with Intel or compatible 1 GHz or faster processor (2 GHz or faster is recommended; only a single processor is supported)
64-Bit Systems: 1.4 GHz or faster processor (2 GHz or faster is recommended; only a single processor is supported)
Minimum of 256 MB of RAM (1 GB or more is recommended)
1 GB of free hard disk space

Connecting to Visual Studio 2005 requires downloading and installing Visual Studio 2005 Support for SQL Server 2008, Community Technology Preview.

Please read important information in the Release Notes before installing SQL Server 2008 with Visual Studio 2008.

To learn more about what is required to run SQL Server 2008 Express, see the system requirements page.

Instructions

Note: You must have administrative rights on the computer to install SQL Server 2008 Express.

We recommend that you read the Release Notes and Readme before installing SQL Server 2008 Express.

Step 1: Download and install Microsoft .Net Framework 3.5 SP1.

Step 2: Download and install Windows Installer 4.5.

Step 3: Download SQL Server 2008 Express by clicking the appropriate link later on this page. To start the installation immediately, click Run. To install SQL Server Express at a later time, click Save.

Note: SQL Server 2008 Express includes both 32-bit and 64-bit versions. SQLEXPR32_x86 is a smaller package that can be used to install SQL Server 2008 Express onto only 32-bit operating systems. SQLEXPR_x86 is the same product but supports installation onto both 32-bit and 64-bit (WoW) operating systems. SQLEXPR_x64 is a native 64-bit SQL Server 2008 Express and supports installation onto only 64-bit operating systems. There is no other difference between these packages.


Additional information

SQL Server 2008 Express is available for x86 and x64 systems. SQL Server 2008 Express is not supported on IA64 systems.


See SQL Server Books Online for detailed information on installing and using SQL Server 2008 Express.


If you have questions about SQL Server 2008 Express, visit the SQL Server 2008 forums on MSDN.


Register your personal copy of SQL Server 2008 Express if you haven't already done so.


Building and shipping applications with SQL Server 2008 Express? Sign up for free redistribution rights here.


Help improve SQL Server 2008 Express by submitting bugs to Microsoft Connect Feedback.



DHCP

Dynamic Host Configuration Protocol
From Wikipedia, the free encyclopedia
"DHCP" redirects here. For other uses, see DHCP (disambiguation). This article has multiple issues. Please help improve it or discuss these issues on the talk page. This article needs additional citations for verification. (April 2010)
This article contains instructions, advice, or how-to content. (November 2010)
This article may be too technical for most readers to understand. (November 2010)



[Figure: a DHCP server settings tab]

The Dynamic Host Configuration Protocol (DHCP) is a network protocol that is used to configure network devices so that they can communicate on an IP network. A DHCP client uses the DHCP protocol to acquire configuration information, such as an IP address, a default route and one or more DNS server addresses from a DHCP server. The DHCP client then uses this information to configure its host. Once the configuration process is complete, the host is able to communicate on the internet.

The DHCP server maintains a database of available IP addresses and configuration information. When it receives a request from a client, the DHCP server determines the network to which the DHCP client is connected, and then allocates an IP address or prefix that is appropriate for the client, and sends configuration information appropriate for that client.

Because the DHCP protocol must work correctly even before DHCP clients have been configured, the DHCP server and DHCP client must be connected to the same network link. In larger networks, this is not practical. On such networks, each network link contains one or more DHCP relay agents. These DHCP relay agents receive messages from DHCP clients and forward them to DHCP servers. DHCP servers send responses back to the relay agent, and the relay agent then sends these responses to the DHCP client on the local network link.

DHCP servers typically grant IP addresses to clients only for a limited interval. DHCP clients are responsible for renewing their IP address before that interval has expired, and must stop using the address once the interval has expired, if they have not been able to renew it.
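As a concrete illustration of this renewal schedule, RFC 2131 suggests that by default a client begin renewing at 50% of the lease duration (T1) and fall back to rebinding at 87.5% (T2). A minimal sketch (the function name is illustrative, not part of any real API):

```python
# Sketch of the default lease timers suggested by RFC 2131: a client
# attempts renewal (T1) at 50% of the lease and rebinding (T2) at 87.5%.
def renewal_timers(lease_seconds):
    t1 = lease_seconds * 0.5    # start unicast renewal with the server
    t2 = lease_seconds * 0.875  # fall back to broadcast rebinding
    return t1, t2

# For a one-day (86400 s) lease:
print(renewal_timers(86400))  # (43200.0, 75600.0)
```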

DHCP is used for IPv4 and IPv6. While both versions serve much the same purpose, the details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered separate protocols.[1]

Hosts that do not use DHCP for address configuration may still use it to obtain other configuration information. Alternatively, IPv6 hosts may use stateless address autoconfiguration. IPv4 hosts may use link-local addressing to achieve limited local connectivity.
History

DHCP was first defined as a standards track protocol in RFC 1531 in October 1993, as an extension to the Bootstrap Protocol (BOOTP). The motivation for extending BOOTP was that BOOTP required manual intervention to add configuration information for each client, and did not provide a mechanism for reclaiming disused IP addresses.

Many worked to clarify the protocol as it gained popularity, and in 1997 RFC 2131 was released, and remains as of 2011 the standard for IPv4 networks. DHCPv6 is documented in RFC 3315. RFC 3633 added a DHCPv6 mechanism for prefix delegation. DHCPv6 was further extended to provide configuration information to clients configured using stateless address autoconfiguration in RFC 3736.

The BOOTP protocol itself was first defined in RFC 951 as a replacement for the Reverse Address Resolution Protocol (RARP). The primary motivation for replacing RARP with BOOTP was that RARP was a data link layer protocol. This made implementation difficult on many server platforms, and required that a server be present on each individual network link. BOOTP introduced the innovation of a relay agent, which allowed BOOTP packets to be forwarded off the local network using standard IP routing, so that one central BOOTP server could serve hosts on many IP subnets.[2]
Technical overview

Dynamic Host Configuration Protocol automates network-parameter assignment to network devices from one or more DHCP servers. Even in small networks, DHCP is useful because it makes it easy to add new machines to the network.

When a DHCP-configured client (a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as the default gateway, domain name, name servers, and other servers such as time servers. On receiving a valid request, the server assigns the computer an IP address, a lease (the length of time the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting, and must complete before the client can initiate IP-based communication with other hosts. Upon disconnecting, the IP address is returned to the pool for use by another computer, so the same address can be reused by different computers over time.

Depending on implementation, the DHCP server may have three methods of allocating IP-addresses:
dynamic allocation: A network administrator assigns a range of IP addresses to DHCP, and each client computer on the LAN is configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed.
automatic allocation: The DHCP server permanently assigns a free IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.
static allocation: The DHCP server allocates an IP address based on a table with MAC address/IP address pairs, which are manually filled in (perhaps by a network administrator). Only clients with a MAC address listed in this table will be allocated an IP address. This feature, which is not supported by all DHCP servers, is variously called Static DHCP Assignment by DD-WRT, fixed-address by the dhcpd documentation, Address Reservation by Netgear, DHCP reservation or Static DHCP by Cisco and Linksys, and IP reservation or MAC/IP binding by various other router manufacturers.
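The three allocation policies above amount to a simple lookup order: static entries first, then the client's previous address, then any free address from the pool. The toy sketch below illustrates that order; all names are hypothetical and this is not a real DHCP server.

```python
# Toy sketch (all names hypothetical) of the three allocation policies:
# static entries win, then the client's previous address is preferred
# (automatic allocation), then any free address is handed out (dynamic).
class AddressAllocator:
    def __init__(self, pool, static_map=None):
        self.free = list(pool)          # administrator-defined dynamic pool
        self.history = {}               # MAC -> last assigned IP
        self.static = static_map or {}  # MAC -> IP, manually configured

    def allocate(self, mac):
        if mac in self.static:                 # static allocation
            return self.static[mac]
        prev = self.history.get(mac)
        if prev in self.free:                  # automatic allocation
            self.free.remove(prev)
            return prev
        ip = self.free.pop(0)                  # dynamic allocation
        self.history[mac] = ip
        return ip

    def release(self, ip):
        # Lease expired or released: the address returns to the pool.
        self.free.append(ip)

alloc = AddressAllocator(["192.168.1.100", "192.168.1.101"],
                         static_map={"aa:bb": "192.168.1.50"})
print(alloc.allocate("aa:bb"))  # 192.168.1.50 (static)
ip = alloc.allocate("cc:dd")    # 192.168.1.100 (dynamic)
alloc.release(ip)
print(alloc.allocate("cc:dd"))  # 192.168.1.100 again (automatic)
```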
Technical details

DHCP uses the same two ports assigned by IANA for BOOTP: destination UDP port 67 for sending data to the server, and UDP port 68 for data to the client. DHCP communications are connectionless in nature.

DHCP operations fall into four basic phases: IP discovery, IP lease offer, IP request, and IP lease acknowledgement. These points are often abbreviated as DORA (Discovery, Offer, Request, Acknowledgement).

DHCP clients and servers on the same subnet communicate via UDP broadcasts, initially. If the client and server are on different subnets, a DHCP Helper or DHCP Relay Agent may be used. Clients requesting renewal of an existing lease may communicate directly via UDP unicast, since the client already has an established IP address at that point.
DHCP discovery

The client broadcasts messages on the physical subnet to discover available DHCP servers. Network administrators can configure a local router to forward DHCP packets to a DHCP server on a different subnet. The client implementation creates a User Datagram Protocol (UDP) packet with the broadcast destination of 255.255.255.255 or the specific subnet broadcast address.

A DHCP client can also request its last-known IP address (in the example below, 192.168.1.100). If the client remains connected to a network for which this IP is valid, the server may grant the request. Otherwise, it depends whether the server is set up as authoritative or not. An authoritative server will deny the request, making the client ask for a new IP address immediately. A non-authoritative server simply ignores the request, leading to an implementation-dependent timeout for the client to give up on the request and ask for a new IP address.
DHCPDISCOVER
UDP Src=0.0.0.0 sPort=68
Dest=255.255.255.255 dPort=67
OP HTYPE HLEN HOPS
0x01 0x01 0x06 0x00
XID
0x3903F326
SECS FLAGS
0x0000 0x0000
CIADDR (Client IP Address)
0x00000000
YIADDR (Your IP Address)
0x00000000
SIADDR (Server IP Address)
0x00000000
GIADDR (Gateway IP Address)
0x00000000
CHADDR (Client Hardware Address)
0x00053C04
0x8D590000
0x00000000
0x00000000
192 octets of 0s (BOOTP legacy; overflow space for additional options)
Magic Cookie
0x63825363
DHCP Options
DHCP option 53: DHCP Discover
DHCP option 50: 192.168.1.100 requested
DHCP option 55: Parameter Request List:

Request Subnet Mask (1), Router (3), Domain Name (15),
Domain Name Server (6)
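The fixed-format header shown above can be packed with a few lines of Python's struct module. This is a sketch of the DHCPDISCOVER example only (the variable-length options after the magic cookie are omitted), and the function name is illustrative.

```python
import struct

# Sketch of packing the fixed BOOTP header of the DHCPDISCOVER example
# above (options omitted). Values match the table: OP=1, HTYPE=1 (Ethernet),
# HLEN=6, XID=0x3903F326. The function name is illustrative.
def build_discover_header(xid, mac):
    fixed = struct.pack(
        "!BBBBIHH4s4s4s4s",
        1, 1, 6, 0,        # OP, HTYPE, HLEN, HOPS
        xid,               # XID (transaction ID)
        0, 0,              # SECS, FLAGS
        b"\x00" * 4,       # CIADDR
        b"\x00" * 4,       # YIADDR
        b"\x00" * 4,       # SIADDR
        b"\x00" * 4,       # GIADDR
    )
    chaddr = mac.ljust(16, b"\x00")           # CHADDR padded to 16 octets
    legacy = b"\x00" * 192                    # BOOTP sname + file fields
    cookie = bytes([0x63, 0x82, 0x53, 0x63])  # DHCP magic cookie
    return fixed + chaddr + legacy + cookie

pkt = build_discover_header(0x3903F326, bytes.fromhex("00053C048D59"))
print(len(pkt))  # 240 octets before any DHCP options
```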

DHCP offer

When a DHCP server receives an IP lease request from a client, it reserves an IP address for the client and extends an IP lease offer by sending a DHCPOFFER message to the client. This message contains the client's MAC address, the IP address that the server is offering, the subnet mask, the lease duration, and the IP address of the DHCP server making the offer.

The server determines the configuration based on the client's hardware address as specified in the CHADDR (Client Hardware Address) field. Here the server, 192.168.1.1, specifies the IP address in the YIADDR (Your IP Address) field.
DHCPOFFER
UDP Src=192.168.1.1 sPort=67
Dest=255.255.255.255 dPort=68
OP HTYPE HLEN HOPS
0x02 0x01 0x06 0x00
XID
0x3903F326
SECS FLAGS
0x0000 0x0000
CIADDR (Client IP Address)
0x00000000
YIADDR (Your IP Address)
0xC0A80164
SIADDR (Server IP Address)
0xC0A80101
GIADDR (Gateway IP Address)
0x00000000
CHADDR (Client Hardware Address)
0x00053C04
0x8D590000
0x00000000
0x00000000
192 octets of 0s (BOOTP legacy)
Magic Cookie
0x63825363
DHCP Options
DHCP option 53: DHCP Offer
DHCP option 1: 255.255.255.0 subnet mask
DHCP option 3: 192.168.1.1 router
DHCP option 51: 86400s (1 day) IP lease time
DHCP option 54: 192.168.1.1 DHCP server
DHCP option 6: DNS servers 9.7.10.15, 9.7.10.16, 9.7.10.18

DHCP request

In response to the DHCP offer, the client replies with a DHCP request, unicast to the server, requesting the offered address. A client can receive DHCP offers from multiple servers, but it will accept only one. Based on the Transaction ID field in the request, servers are informed whose offer the client has accepted. When the other DHCP servers receive this message, they withdraw any offers that they might have made to the client and return the offered addresses to the pool of available addresses. In some cases the DHCP request message is broadcast instead of being unicast to a particular DHCP server, because the DHCP client has still not received an IP address; broadcasting also lets one message inform all other DHCP servers that another server will be supplying the IP address, without a series of unicast messages.
DHCPREQUEST
UDP Src=0.0.0.0 sPort=68
Dest=255.255.255.255 dPort=67
OP HTYPE HLEN HOPS
0x01 0x01 0x06 0x00
XID
0x3903F326
SECS FLAGS
0x0000 0x0000
CIADDR (Client IP Address)
0x00000000
YIADDR (Your IP Address)
0x00000000
SIADDR (Server IP Address)
0xC0A80101
GIADDR (Gateway IP Address)
0x00000000
CHADDR (Client Hardware Address)
0x00053C04
0x8D590000
0x00000000
0x00000000
192 octets of 0s (BOOTP legacy)
Magic Cookie
0x63825363
DHCP Options
DHCP option 53: DHCP Request
DHCP option 50: 192.168.1.100 requested
DHCP option 54: 192.168.1.1 DHCP server.

DHCP acknowledgement

When the DHCP server receives the DHCPREQUEST message from the client, the configuration process enters its final phase. The acknowledgement phase involves sending a DHCPACK packet to the client. This packet includes the lease duration and any other configuration information that the client might have requested. At this point, the IP configuration process is completed.

The protocol expects the DHCP client to configure its network interface with the negotiated parameters.
DHCPACK
UDP Src=192.168.1.1 sPort=67
Dest=255.255.255.255 dPort=68
OP HTYPE HLEN HOPS
0x02 0x01 0x06 0x00
XID
0x3903F326
SECS FLAGS
0x0000 0x0000
CIADDR (Client IP Address)
0x00000000
YIADDR (Your IP Address)
0xC0A80164
SIADDR (Server IP Address)
0xC0A80101
GIADDR (Gateway IP Address switched by relay)
0x00000000
CHADDR (Client Hardware Address)
0x00053C04
0x8D590000
0x00000000
0x00000000
192 octets of 0s (BOOTP legacy)
Magic Cookie
0x63825363
DHCP Options
DHCP option 53: DHCP ACK
DHCP option 1: 255.255.255.0 subnet mask
DHCP option 3: 192.168.1.1 router
DHCP option 51: 86400s (1 day) IP lease time
DHCP option 54: 192.168.1.1 DHCP server
DHCP option 6: DNS servers 9.7.10.15, 9.7.10.16, 9.7.10.18


After the client obtains an IP address, the client may use the Address Resolution Protocol (ARP) to prevent IP conflicts caused by overlapping address pools of DHCP servers.
DHCP information

A DHCP client may request more information than the server sent with the original DHCPOFFER. The client may also request repeat data for a particular application. For example, browsers use DHCP Inform to obtain web proxy settings via WPAD.
DHCP releasing

The client sends a request to the DHCP server to release the DHCP information and the client deactivates its IP address. As client devices usually do not know when users may unplug them from the network, the protocol does not mandate the sending of DHCP Release.
Client configuration parameters in DHCP

A DHCP server can provide optional configuration parameters to the client. RFC 2132 describes the available DHCP options as defined by the Internet Assigned Numbers Authority (IANA) in its "DHCP and BOOTP Parameters" registry.

A DHCP client can select, manipulate and overwrite parameters provided by a DHCP server.[3]
DHCP options

Options are variable-length octet strings. The first octet is the option code, the second octet is the number of following octets, and the remaining octets are code-dependent. For example, the DHCP message type option for an offer would appear as 0x35, 0x01, 0x02, where 0x35 is code 53 for "DHCP Message Type", 0x01 means one octet follows and 0x02 is the value meaning "Offer".
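This encoding is a simple type-length-value scheme, so a decoder only needs to handle the two single-octet exceptions, Pad (0) and End (255). A minimal illustrative parser:

```python
# Minimal illustrative parser for the type-length-value option encoding
# described above. Pad (0) and End (255) are the only single-octet options.
def parse_options(data):
    options, i = {}, 0
    while i < len(data):
        code = data[i]
        if code == 255:   # End: marks the end of the option field
            break
        if code == 0:     # Pad: skip a single alignment octet
            i += 1
            continue
        length = data[i + 1]
        options[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return options

# Option 53 (DHCP Message Type) = 0x02 ("Offer"), one pad octet, then End.
print(parse_options(bytes([0x35, 0x01, 0x02, 0x00, 0xFF])))  # {53: b'\x02'}
```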

The following tables list the available DHCP options, as stated in RFC2132.[4]
RFC 1497 vendor extensions[5]
Code Name Length Notes
0 Pad[6] 1 octet Can be used to pad other options so that they are aligned to the word boundary
1 Subnet Mask[7] 4 octets Must be sent after the router option (option 3) if both are included
2 Time Offset[8] 4 octets
3 Router multiples of 4 octets Available routers, should be listed in order of preference
4 Time Server multiples of 4 octets Available time servers to synchronise with, should be listed in order of preference
5 Name Server multiples of 4 octets Available IEN116 name servers, should be listed in order of preference
6 Domain Name Server multiples of 4 octets Available DNS servers, should be listed in order of preference
7 Log Server multiples of 4 octets Available log servers, should be listed in order of preference.
8 Cookie Server multiples of 4 octets
9 LPR Server multiples of 4 octets
10 Impress Server multiples of 4 octets
11 Resource Location Server multiples of 4 octets
12 Host Name minimum of 1 octet
13 Boot File Size 2 octets Length of the boot image in 512-octet blocks
14 Merit Dump File minimum of 1 octet Path where crash dumps should be stored
15 Domain Name minimum of 1 octet
16 Swap Server 4 octets
17 Root Path minimum of 1 octet
18 Extensions Path minimum of 1 octet
255 End 0 octets Used to mark the end of the vendor option field

IP Layer Parameters per Host[9]
Code  Name  Length  Notes
19 IP Forwarding Enable/Disable 1 octet
20 Non-Local Source Routing Enable/Disable 1 octet
21 Policy Filter multiples of 8 octets
22 Maximum Datagram Reassembly Size 2 octets
23 Default IP Time-to-live 1 octet
24 Path MTU Aging Timeout 4 octets
25 Path MTU Plateau Table multiples of 2 octets

IP Layer Parameters per Interface[10]
Code  Name  Length  Notes
26 Interface MTU 2 octets
27 All Subnets are Local 1 octet
28 Broadcast Address 4 octets
29 Perform Mask Discovery 1 octet
30 Mask Supplier 1 octet
31 Perform Router Discovery 1 octet
32 Router Solicitation Address 4 octets
33 Static Route multiples of 8 octets A list of destination/router pairs

Link Layer Parameters per Interface[11]
Code  Name  Length  Notes
34 Trailer Encapsulation Option 1 octet
35 ARP Cache Timeout 4 octets
36 Ethernet Encapsulation 1 octet

TCP Parameters[12]
Code  Name  Length  Notes
37 TCP Default TTL 1 octet
38 TCP Keepalive Interval 4 octets
39 TCP Keepalive Garbage 1 octet

Application and Service Parameters[13]
Code  Name  Length  Notes
40 Network Information Service Domain minimum of 1 octet
41 Network Information Servers multiples of 4 octets
42 Network Time Protocol Servers multiples of 4 octets
43 Vendor Specific Information minimum of 1 octet
44 NetBIOS over TCP/IP Name Server multiples of 4 octets
45 NetBIOS over TCP/IP Datagram Distribution Server multiples of 4 octets
46 NetBIOS over TCP/IP Node Type 1 octet
47 NetBIOS over TCP/IP Scope minimum of 1 octet
48 X Window System Font Server multiples of 4 octets
49 X Window System Display Manager multiples of 4 octets
64 Network Information Service+ Domain minimum of 1 octet
65 Network Information Service+ Servers multiples of 4 octets
68 Mobile IP Home Agent multiples of 4 octets
69 Simple Mail Transport Protocol (SMTP) Server multiples of 4 octets
70 Post Office Protocol (POP3) Server multiples of 4 octets
71 Network News Transport Protocol (NNTP) Server multiples of 4 octets
72 Default World Wide Web (WWW) Server multiples of 4 octets
73 Default Finger Server multiples of 4 octets
74 Default Internet Relay Chat (IRC) Server multiples of 4 octets
75 StreetTalk Server multiples of 4 octets
76 StreetTalk Directory Assistance (STDA) Server multiples of 4 octets

DHCP Extensions[14]
Code  Name  Length  Notes
50 Requested IP Address 4 octets
51 IP Address Lease Time 4 octets
52 Option Overload 1 octet
53 DHCP Message Type 1 octet
54 Server Identifier 4 octets
55 Parameter Request List minimum of 1 octet
56 Message minimum of 1 octet
57 Maximum DHCP Message Size 2 octets
58 Renewal (T1) Time Value 4 octets
59 Rebinding (T2) Time Value 4 octets
60 Vendor class identifier minimum of 1 octet
61 Client-identifier minimum of 2 octets
66 TFTP server name minimum of 1 octet
67 Bootfile name minimum of 1 octet
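As an illustration of how several of the options above combine, the sketch below assembles the options field a client might send in a DHCPDISCOVER, using option 53 (DHCP Message Type, value 1 = Discover), option 55 (Parameter Request List) and the End option (255). The option codes come from the tables above; the particular parameters requested are arbitrary examples chosen for this sketch:

```python
def dhcp_discover_options() -> bytes:
    """Assemble an example options field for a DHCPDISCOVER message."""
    opts = bytes([53, 1, 1])            # DHCP Message Type = 1 (Discover)
    # Parameter Request List: subnet mask (1), router (3),
    # DNS servers (6), domain name (15)
    opts += bytes([55, 4, 1, 3, 6, 15])
    opts += bytes([255])                # End option terminates the field
    return opts

payload = dhcp_discover_options()
```

A real client would place this after the fixed-format BOOTP header and the DHCP magic cookie; only the option encoding itself is shown here.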

Vendor identification

An option exists to identify the vendor and functionality of a DHCP client. The information is a variable-length string of characters or octets with a meaning specified by the vendor of the DHCP client. One method that a DHCP client can use to communicate to the server that it is using a certain type of hardware or firmware is to set a value in its DHCP requests called the Vendor Class Identifier (VCI) (Option 60). This method allows a DHCP server to differentiate between classes of client machines, such as cable modems and set-top boxes, and to process the requests from each type of device appropriately. The value of this option gives the DHCP server a hint about any extra information that this client may need in its DHCP response.
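Option 60 carries an opaque vendor-chosen string in the same code/length/value encoding as every other option. A minimal sketch of encoding it (the helper name and the example string "MSFT 5.0", which many Windows DHCP clients happen to send, are used here purely for illustration):

```python
def encode_option(code: int, value: bytes) -> bytes:
    """Encode one DHCP option as a code octet, length octet, and value."""
    if not 0 < len(value) < 256:
        raise ValueError("option value must be 1-255 octets")
    return bytes([code, len(value)]) + value

# Option 60: Vendor Class Identifier
vci = encode_option(60, b"MSFT 5.0")
# vci == b'<\x08MSFT 5.0' (code 60, length 8, then the string)
```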
DHCP relaying

In small networks, where only one IP subnet is being managed, DHCP clients communicate directly with DHCP servers. However, DHCP servers can also provide IP addresses for multiple subnets. In this case, a DHCP client that has not yet acquired an IP address cannot communicate directly with the DHCP server using IP routing, because it doesn't have a routable IP address, nor does it know the IP address of a router. In order to allow DHCP clients on subnets not directly served by DHCP servers to communicate with DHCP servers, DHCP relay agents can be installed on these subnets. The DHCP client broadcasts on the local link; the relay agent receives the broadcast and transmits it to one or more DHCP servers using unicast. The relay agent stores its own IP address in the GIADDR field of the DHCP packet. The DHCP server uses the GIADDR to determine the subnet on which the relay agent received the broadcast, and allocates an IP address on that subnet. When the DHCP server replies to the client, it sends the reply to the GIADDR address, again using unicast. The relay agent then retransmits the response on the local network.
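The server-side use of GIADDR described above can be sketched as a pool lookup. The pool table and addresses below are hypothetical examples, not any real server's configuration:

```python
import ipaddress

# Hypothetical configuration: subnet -> list of free addresses.
POOLS = {
    ipaddress.ip_network("192.168.1.0/24"): ["192.168.1.50", "192.168.1.51"],
    ipaddress.ip_network("10.0.2.0/24"): ["10.0.2.100"],
}

def select_pool(giaddr: str):
    """Pick the address pool for a DHCP request based on GIADDR.

    A zero GIADDR means the request arrived directly on a local
    interface (no relay); otherwise the server allocates from
    whichever configured subnet contains the relay agent's address.
    """
    if giaddr == "0.0.0.0":
        return None  # directly connected; use the receiving interface's subnet
    addr = ipaddress.ip_address(giaddr)
    for subnet, pool in POOLS.items():
        if addr in subnet:
            return pool
    return None

# A request relayed by an agent at 10.0.2.1 is served from 10.0.2.0/24:
pool = select_pool("10.0.2.1")
```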
Reliability

The DHCP protocol provides reliability in several ways: periodic renewal, rebinding, and failover. DHCP clients are allocated leases that last for some period of time. Clients begin to attempt to renew their leases once half the lease interval has expired. They do this by sending a unicast DHCPREQUEST message to the DHCP server that granted the original lease. If that server is down or unreachable, it will fail to respond to the DHCPREQUEST. However, the DHCPREQUEST will be repeated by the client from time to time,[specify] so when the DHCP server comes back up or becomes reachable again, the DHCP client will succeed in contacting it, and renew its lease.

If the DHCP server is unreachable for an extended period of time,[specify] the DHCP client will attempt to rebind, by broadcasting its DHCPREQUEST rather than unicasting it. Because it is broadcast, the DHCPREQUEST message will reach all available DHCP servers. If some other DHCP server is able to renew the lease, it will do so at this time.
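The renewal and rebinding thresholds mentioned above have default values in RFC 2131: the renewal time T1 defaults to half the lease duration, and the rebinding time T2 to 0.875 of it (both can also be set explicitly by the server via options 58 and 59). A small sketch of the defaults:

```python
def renewal_times(lease_seconds: int) -> tuple:
    """Default T1 (renewal) and T2 (rebinding) times per RFC 2131.

    At T1 the client unicasts DHCPREQUEST to its original server;
    at T2 it broadcasts DHCPREQUEST to reach any available server.
    """
    t1 = int(lease_seconds * 0.5)
    t2 = int(lease_seconds * 0.875)
    return t1, t2

t1, t2 = renewal_times(86400)   # a one-day lease
# t1 == 43200 (12 h), t2 == 75600 (21 h)
```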

In order for rebinding to work, when the client successfully contacts a backup DHCP server, that server must have accurate information about the client's binding. Maintaining accurate binding information between two servers is a complicated problem; if both servers are able to update the same lease database, there must be a mechanism to avoid conflicts between updates on the independent servers. A standard for implementing fault-tolerant DHCP servers was developed at the Internet Engineering Task Force.[15][note 1]

If rebinding fails, the lease will eventually expire. When the lease expires, the client must stop using the IP address granted to it in its lease. At that time, it will restart the DHCP process from the beginning by broadcasting a DHCPDISCOVER message. Since its lease has expired, it will accept any IP address offered to it. Once it has a new IP address, presumably from a different DHCP server, it will once again be able to use the network. However, since its IP address has changed, any ongoing connections will be broken.
Security

The base DHCP protocol does not include any mechanism for authentication.[16] Because of this, it is vulnerable to a variety of attacks. These attacks fall into three main categories:
Unauthorized DHCP servers providing false information to clients.[17]
Unauthorized clients gaining access to resources.[17]
Resource exhaustion attacks from malicious DHCP clients.[17]

Because the client has no way to validate the identity of a DHCP server, unauthorized DHCP servers can be operated on networks, providing incorrect information to DHCP clients. This can serve either as a denial-of-service attack, preventing the client from gaining access to network connectivity[citation needed], or as a man-in-the-middle attack. Because the DHCP server provides the DHCP client with server IP addresses, such as the IP address of one or more DNS servers,[17] an attacker can convince a DHCP client to do its DNS lookups through its own DNS server, and can therefore provide its own answers to DNS queries from the client.[18] This in turn allows the attacker to redirect network traffic through itself, allowing it to eavesdrop on connections between the client and network servers it contacts, or to simply replace those network servers with its own.[18]

Because the DHCP server has no secure mechanism for authenticating the client, clients can gain unauthorized access to IP addresses by presenting credentials, such as client identifiers, that belong to other DHCP clients.[citation needed] This also allows DHCP clients to exhaust the DHCP server's store of IP addresses—by presenting new credentials each time it asks for an address, the client can consume all the available IP addresses on a particular network link, preventing other DHCP clients from getting service.[citation needed]

DHCP does provide some mechanisms for mitigating these problems. The Relay Agent Information Option protocol extension (RFC 3046) allows network operators to attach tags to DHCP messages as these messages arrive on the network operator's trusted network. This tag is then used as an authorization token to control the client's access to network resources. Because the client has no access to the network upstream of the relay agent, the lack of authentication does not prevent the DHCP server operator from relying on the authorization token.[16]

Another extension, Authentication for DHCP Messages (RFC 3118), provides a mechanism for authenticating DHCP messages. Unfortunately RFC 3118 has not seen widespread adoption because of the problems of managing keys for large numbers of DHCP clients.[19]
Confidentiality

In an ISP context, DHCP logs of address assignments either contain, or are links to, personally identifying confidential information: the contact details of the client. These logs are attractive to spammers, and may be sought for "fishing expeditions" by police agencies or litigators. At least one implementation[citation needed] mimics the Canadian Library Association policy for book circulation and does not retain identifying information once the "loan" has ended.
See also

DHCP snooping
IP address, especially Static and dynamic IP addresses
Peg DHCP (RFC 2322)
Preboot Execution Environment (PXE)
Reverse Address Resolution Protocol (RARP)
Rogue DHCP
Web Proxy Autodiscovery Protocol (WPAD)
Zeroconf — Zero Configuration Networking
UDP Helper Address — a tool for routing DHCP requests across subnet boundaries
Boot Service Discovery Protocol (BSDP), a DHCP extension used by Apple's NetBoot
BOOTP - earlier protocol for the same purpose
DHCPv6 - DHCP for IPv6
Notes
^ The IETF proposal provided a mechanism whereby two servers could remain loosely in sync with each other in such a way that even in the event of a total failure of one server, the other server could recover the lease database and continue operating. Due to the length and complexity of the specification, it was never published as a standard; however, the techniques described in the specification are in wide use, with one open source implementation in the ISC DHCP server as well as several commercial implementations.
References
^ Ralph Droms; Ted Lemon (2003). The DHCP Handbook. SAMS Publishing. p. 436. ISBN 0-672-32327-3.
^ Bill Croft; John Gilmore (September 1985). "RFC 951 - Bootstrap Protocol". Network Working Group.
^ In Unix-like systems this client-level refinement typically takes place according to the values in a /etc/dhclient.conf configuration file.
^ Alexander, Steve; Droms, Ralph (March 1997). DHCP Options and BOOTP Vendor Extensions. IETF. RFC 2132. Retrieved June 10, 2012.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 3: RFC 1497 vendor extensions. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 3.1: Pad Option. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 3.3: Subnet Mask. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 3.4: Time Offset. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 4: IP Layer Parameters per Host. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 5: IP Layer Parameters per Interface. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 6: Link Layer Parameters per Interface. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 7: TCP Parameters. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 8: Application and Service Parameters. Retrieved 2012-07-26.
^ Alexander, Steve; Droms, Ralph (March 1997). "RFC 2132: DHCP Options and BOOTP Vendor Extensions". IETF. Section 9: DHCP Extensions. Retrieved 2012-07-26.
^ Droms, Ralph; Kinnear, Kim; Stapp, Mark; Volz, Bernie; Gonczi, Steve; Rabil, Greg; Dooley, Michael; Kapur, Arun (March 2003). DHCP Failover Protocol. IETF. I-D draft-ietf-dhc-failover-12. Retrieved May 09, 2010.
^ a b Michael Patrick (January 2001). "RFC 3046 - DHCP Relay Agent Information Option". Network Working Group.
^ a b c d Ralph Droms (March 1997). "RFC 2131 - Dynamic Host Configuration Protocol". Network Working Group.
^ a b Sergey Golovanov (Kaspersky Labs) (June 2011). "TDSS loader now got "legs"".
^ Ted Lemon (April 2002). "Implementation of RFC 3118".
External links
RFC 2131 - Dynamic Host Configuration Protocol
RFC 2132 - DHCP Options and BOOTP Vendor Extensions
RFC 3046 - DHCP Relay Agent Information Option
RFC 3942 - Reclassifying Dynamic Host Configuration Protocol Version Four (DHCPv4) Options
RFC 4242 - Information Refresh Time Option for Dynamic Host Configuration Protocol for IPv6
RFC 4361 - Node-specific Client Identifiers for Dynamic Host Configuration Protocol Version Four (DHCPv4)
RFC 4436 - Detecting Network Attachment in IPv4 (DNAv4)

Operating system

An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system; application programs require an operating system to function.

Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it.

Operating systems can be found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows,[3] Windows Phone, and IBM z/OS. All of these, except Windows and z/OS, share roots in UNIX.

Types of operating systems
Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.

Multi-user
A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems, as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.

Multi-tasking vs. single-tasking
A multi-tasking operating system allows more than one program to be running at a time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions, both Windows NT and Win9x, used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking.

Distributed
Further information: Distributed system

A distributed operating system manages a group of independent computers and makes them appear to be a single computer.
The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.

Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are examples of embedded operating systems.

History

Main article: History of operating systems
See also: Resident monitor

Early computers were built to perform a series of single tasks, like a calculator. Operating systems did not exist in their modern and more complex forms until the early 1960s.[4] Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.

In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general-purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).
OS/360 was used on most IBM mainframe computers beginning in 1966, including the computers that helped NASA put a man on the moon.

In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period of time and would arrive at a scheduled time with program and data on punched paper cards and/or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.[4]

Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and generating computer code from human-readable symbolic code. This was the genesis of the modern-day computer system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job priority.[citation needed]

Mainframes

Main article: Mainframe computer
See also: History of IBM mainframe operating systems

Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupt, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications.
In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.

During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system, and applications written for OS/360 can still be run on modern machines.[citation needed] OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When the process is terminated for any reason, all of these resources are re-claimed by the operating system.

The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).

Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages.
In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time-sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.

Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language, ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS400, IBM made an approach to Burroughs to licence MCP to run on the AS400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.

UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early mainframe systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.

General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS).

Digital Equipment Corporation developed many operating systems for its various computer lines, including the TOPS-10 and TOPS-20 time-sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact, most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility proved to be more significant.

The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:

Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.

Microcomputers

PC-DOS was an early personal computer OS that featured a command-line interface. Mac OS by Apple Computer became the first widespread OS to feature a graphical user interface; many of its features, such as windows and icons, would later become commonplace in GUIs.

The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.

The introduction of the Intel 80386 CPU chip, with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.

The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement for the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user-space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux.

The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s.
Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.

Examples of operating systems

UNIX and UNIX-like operating systems

Main article: Unix

Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History). The UNIX-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group, which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.

Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas. Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris Operating System can run on multiple types of hardware, including x86 and SPARC servers, and PCs. Apple's Mac OS X, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid-kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.

Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
BSD and its descendants

Main article: Berkeley Software Distribution

[Image: The first server for the World Wide Web ran on NeXTSTEP, based on BSD.]

A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP. BSD has its roots in Unix. In 1974, the University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T. Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web. Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD.
OS X

Main article: OS X

[Image: The standard user interface of Mac OS X]

Mac OS X is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997. The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0 "Cheetah") following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being OS X 10.8 "Mountain Lion", which was first made available on February 16, 2012 for developers, and was then released to the public on July 25, 2012. Releases of Mac OS X are named after big cats. The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. In Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version.[5]

Linux and GNU

Main articles: GNU, Linux, and Linux kernel

[Image: Ubuntu, a desktop Linux distribution]
[Image: Android, a popular mobile operating system using the Linux kernel]

Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from supercomputers to wristwatches.
The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest that Linux is used on 1.82% of all personal computers,[6][7] it has been widely adopted for use in servers[8] and embedded systems[9] (such as cell phones). Linux has superseded Unix in most places, and is used on the 10 most powerful supercomputers in the world.[10] The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android. The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that is similar to Unix but with completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support and volunteers who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.

Google Chrome OS

Main article: Google Chrome OS

Chrome OS is an operating system based on the Linux kernel and designed by Google. Since Chrome OS targets computer users who spend most of their time on the Internet, it is mainly a web browser with no ability to run local applications. It relies on Internet applications (or Web apps) used in the web browser to accomplish tasks such as word processing and media viewing, as well as online storage for storing most files.
Microsoft Windows

Main article: Microsoft Windows

[Image: Bootable Windows To Go USB flash drive]
[Image: Microsoft Windows 7 Desktop]

Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[7][11][12][13] The newest version is Windows 8 for workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP as the most used OS.[14][15][16] Microsoft Windows originated in 1985 as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, which only used MS-DOS as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[17][18] and 16-bit Windows 3.x[19] drivers. Windows Me, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 microprocessors, although Windows 8 will support ARM architecture. In the past, Windows NT supported non-Intel architectures. Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers, as Windows competes against Linux and BSD for server market share.[20][21]

Other

There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications.
OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.

Components

The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it is as simple as a mouse or keyboard or as complex as a network interface.

Kernel

Main article: Kernel (computing)

[Image: A kernel connects the application software to the hardware of a computer.]

With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.

Program execution

Main article: Process (computing)

The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs.
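On a POSIX system, the process-creation sequence can be observed from user space with the classic fork/exec/wait pattern. This is a minimal sketch of the kernel services involved, not a description of the kernel's internal procedure:

```python
import os
import sys

def run_program(argv):
    """Ask the kernel to create a process, load a binary into it, and wait."""
    pid = os.fork()                    # kernel creates the new process
    if pid == 0:                       # child: replace ourselves with the program
        os.execvp(argv[0], argv)       # kernel loads the binary and starts it
        os._exit(127)                  # reached only if exec fails
    _, status = os.waitpid(pid, 0)     # parent: wait for the child to finish
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    # Run the Python interpreter itself as the child program.
    print(run_program([sys.executable, "-c", "print('child ran')"]))
```

The parent receives the child's exit status through `waitpid`, which is how a shell, for instance, learns whether the command it launched succeeded.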
Executing an application program involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices.

Interrupts

Main article: Interrupt

Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, having the operating system "watch" the various sources of input for events (polling) that require action, can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place. When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program. When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone).
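Unix signals are a user-space analogue of the interrupt mechanism described above: delivery suspends the normal flow of a process and runs a handler the program registered beforehand. A small sketch on a POSIX system:

```python
import os
import signal

events = []

def on_usr1(signum, frame):
    events.append(signum)                 # code run in response to the event

signal.signal(signal.SIGUSR1, on_usr1)    # associate code with the "interrupt"
os.kill(os.getpid(), signal.SIGUSR1)      # deliver the signal to ourselves
assert events == [signal.SIGUSR1]         # the handler ran, then flow resumed
```

As with a hardware interrupt, the program's state is saved, the handler runs, and normal execution resumes afterwards.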
The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means. A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.

Modes

Main articles: Protected mode and Supervisor mode

[Image: Privilege rings for the x86, available in protected mode. Operating systems determine which processes run in each mode.]

Modern CPUs support multiple modes of operation. CPUs with this capability use at least two modes: protected mode and supervisor mode. The supervisor mode is used by the operating system's kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is written and erased, and communication with devices like graphics cards. Protected mode, in contrast, is used for almost everything else. Applications operate within protected mode, and can only use hardware by communicating with the kernel, which controls everything in supervisor mode. CPUs might have other modes similar to protected mode as well, such as virtual modes used to emulate older processor types (16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one). When a computer first starts up, it is automatically running in supervisor mode.
The first few programs to run on the computer (the BIOS or EFI firmware, the bootloader, and the operating system) have unlimited access to hardware, and this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode. In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory. The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).

Memory management

Main article: Memory management

Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory. Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten.
Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system. Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers. In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses will trigger an interrupt, which will cause the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short, and since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel will generally resort to terminating the offending program, and will report the error. Windows 3.1 through Me had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.

Virtual memory

Main article: Virtual memory
Further information: Page fault

Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it were one continuous chunk of memory, called virtual memory. The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
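The paging mechanism just described can be sketched as a toy model: an "MMU" splits a virtual address into a page number and an offset, looks the page up in a per-process page table, and traps if no mapping exists. The names and sizes here are illustrative only; real hardware does this in silicon with multi-level tables:

```python
PAGE_SIZE = 4096

class PageFault(Exception):
    """Raised when a virtual page has no mapping (the "trap" to the kernel)."""

def translate(page_table, vaddr):
    """Translate a virtual address to a physical one via the page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:           # no mapping: the access traps
        raise PageFault(f"page {page} not mapped")
    return page_table[page] * PAGE_SIZE + offset

table = {0: 7, 1: 3}                     # virtual page -> physical frame
assert translate(table, 10) == 7 * PAGE_SIZE + 10
assert translate(table, PAGE_SIZE + 1) == 3 * PAGE_SIZE + 1
try:
    translate(table, 5 * PAGE_SIZE)      # unmapped page: a "segmentation violation"
except PageFault:
    pass
```

In a real system the kernel's fault handler decides whether such a trap is a legitimate page fault to be serviced or an illegal access that terminates the program.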
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See the section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault. When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. Virtual memory provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[22]

Multitasking

Main articles: Computer multitasking and Process management (computing)
Further information: Context switch, Preemptive multitasking, and Cooperative multitasking

Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a piece of software called a scheduler, which determines how much time each program will spend executing, and in which order execution control should be passed to programs.
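A scheduler's round-robin ordering can be sketched with generators, where each "program" runs until it voluntarily yields control. This is a toy model of cooperative time-sharing; a task that never yields would hang this scheduler, which is exactly the weakness of cooperative multitasking discussed below:

```python
from collections import deque

def task(name, steps, log):
    """A toy program: do one unit of work per time slice, then yield."""
    for i in range(steps):
        log.append(f"{name}{i}")
        yield                          # voluntarily give control back

def round_robin(tasks):
    """Run each task in turn until all of them finish."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()            # scheduler picks the next program
        try:
            next(t)                    # "context switch" into it
            queue.append(t)            # still runnable: back of the queue
        except StopIteration:
            pass                       # finished: drop it

log = []
round_robin([task("A", 2, log), task("B", 2, log)])
assert log == ["A0", "B0", "A1", "B1"]   # strict alternation
```

The strict A/B alternation in the log is the "appearance of simultaneity" that time-sharing produces.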
Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This passing of control between the kernel and applications is called a context switch. An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop. Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See the above sections on interrupts and modes.) On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. AmigaOS is an exception, having had preemptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
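The timed interrupt that enforces preemption has a user-space analogue on POSIX systems: an interval timer that forcibly takes control away from a computation that never yields. A sketch (the 50 ms slice is an arbitrary illustrative value):

```python
import signal

class TimeSlice(Exception):
    """Raised by the timer handler: the analogue of a forced return to the kernel."""

def on_timer(signum, frame):
    raise TimeSlice()

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.05)    # arm a 50 ms "time slice"

count = 0
try:
    while True:                               # a program that never yields
        count += 1
except TimeSlice:
    pass                                      # the timer preempted the loop
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)   # disarm the timer

assert count > 0                              # the loop ran until preempted
```

Under cooperative multitasking this infinite loop would hang the scheduler; with the timer armed, control is taken back regardless of the program's behaviour.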
Disk access and file systems

Main article: Virtual file system

[Image: Filesystems allow users and programs to organize and sort files on a computer, often through the use of directories (or "folders").]

Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree. Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system. While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers. A connected storage device, such as a hard drive, is accessed through a device driver.
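The VFS dispatch idea above, one common interface in front of several per-filesystem "drivers", can be sketched in a few classes. The class names and the mount-by-prefix scheme are illustrative, not a real kernel API:

```python
class FileSystem:
    """The common interface every filesystem "driver" must implement."""
    def read(self, path):
        raise NotImplementedError

class MemFS(FileSystem):
    """A toy filesystem that stores file contents in a dictionary."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

class UpperFS(FileSystem):
    """A different toy filesystem that happens to store everything upper-cased."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data.upper()
    def read(self, path):
        return self.files[path]

class VFS:
    """Dispatch each path to whichever filesystem it is mounted on."""
    def __init__(self):
        self.mounts = {}
    def mount(self, prefix, fs):
        self.mounts[prefix] = fs
    def read(self, path):
        for prefix, fs in self.mounts.items():
            if path.startswith(prefix):
                return fs.read(path[len(prefix):])
        raise FileNotFoundError(path)

vfs = VFS()
mem, up = MemFS(), UpperFS()
vfs.mount("/mem/", mem)
vfs.mount("/up/", up)
mem.write("a", "hello")
up.write("b", "hello")
assert vfs.read("/mem/a") == "hello"
assert vfs.read("/up/b") == "HELLO"
```

The calling program uses one `read` interface throughout; only the VFS knows which driver actually services each request.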
The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices. When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames and directories/folders contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates. Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third-party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software). Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on.
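The common file API described above, creating files and gathering metadata such as size, permissions, and modification time, is visible in any high-level language. A small sketch using Python's standard library, which works the same regardless of which file system driver sits underneath:

```python
import os
import stat
import tempfile

# Create a file, then gather metadata through the common os.stat interface.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "example.txt")
    with open(path, "w") as f:
        f.write("hello")

    info = os.stat(path)
    assert info.st_size == 5              # size in bytes
    assert stat.S_ISREG(info.st_mode)     # it is a regular file
    mode = stat.filemode(info.st_mode)    # permission string, e.g. "-rw-r--r--"
    assert mode.startswith("-")
    mtime = info.st_mtime                 # modification time (Unix timestamp)
```

The program never needs to know whether the directory lives on ext4, NTFS, or a RAM disk; the filesystem driver performs that translation.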
Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored in, whether it is a hard drive, a disc (CD, DVD, etc.), a USB flash drive, or even contained within a file located on another file system.

Device drivers

Main article: Device driver

A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically the operating system, an application package, or a program running under the kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfaces. The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled.
The function of the device driver is then to translate these operating system mandated function calls into device specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view. Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.

Networking

Main article: Computer network

Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH, which allows networked users direct access to a computer's command line interface. Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server. Servers offer (or host) various services to other network computers and users.
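The client/server exchange just described can be sketched over a loopback socket: a server binds a port, a client connects to that port, and the server's handler answers the request. Asking the OS for port 0 lets it choose any free port:

```python
import socket
import threading

def serve_once(server_sock, reply_prefix):
    """Accept one client connection, read its request, and answer it."""
    conn, _ = server_sock.accept()            # wait for one client
    with conn:
        data = conn.recv(1024)                # read the request
        conn.sendall(reply_prefix + data)     # answer it

server = socket.socket()
server.bind(("127.0.0.1", 0))                 # port 0: OS picks a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server, b"echo:"))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    response = client.recv(1024)
t.join()
server.close()
assert response == b"echo:hello"
```

Both endpoints use the same socket API the kernel exposes for remote machines; only the address differs between local and networked communication.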
These services are usually provided through ports or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel. Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound (esd) can easily be extended over the network to provide sound from local applications on a remote system's sound hardware.

Security

Main article: Computer security

Whether a computer is secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel. The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be given, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share).
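Password authentication as described above can be sketched as follows: the system stores a salted hash rather than the password itself, and verifies a login attempt by recomputing the hash. Note this is a teaching sketch; a real system would use a deliberately slow key-derivation function (scrypt, bcrypt, Argon2) rather than plain SHA-256:

```python
import hashlib
import hmac
import os

def make_record(password):
    """Store a random salt plus the salted hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify(record, attempt):
    """Recompute the hash for the attempt and compare in constant time."""
    salt, digest = record
    candidate = hashlib.sha256(salt + attempt.encode()).digest()
    return hmac.compare_digest(candidate, digest)

record = make_record("s3cret")
assert verify(record, "s3cret")
assert not verify(record, "wrong")
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking information about how many leading bytes of a guess were correct.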
Also covered by the concept of requester identity is authorization: the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs. In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured. External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information. Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems.
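The group-based authorization model described above can be sketched as a toy access-control check: each resource lists the groups allowed to use it, and the check consults the requester's group memberships. All names here are illustrative:

```python
# Toy identity and access-control data (illustrative names only).
GROUPS = {"alice": {"staff"}, "bob": {"guests"}}
ACL = {"payroll.txt": {"staff"}, "notice.txt": {"staff", "guests"}}

def authorized(user, resource):
    """Allow access if any of the user's groups appears in the resource's ACL."""
    return bool(GROUPS.get(user, set()) & ACL.get(resource, set()))

assert authorized("alice", "payroll.txt")        # staff may read payroll
assert not authorized("bob", "payroll.txt")      # guests may not
assert authorized("bob", "notice.txt")           # both groups may read notices
```

A real kernel performs an equivalent check on every open, read, and write, using the process's credentials rather than a lookup table like this.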
At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, and not be threatened by a security breach, because the firewall would deny all traffic trying to connect to the service on that port.

An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to either emulate a processor or provide a host for a p-code based system such as Java.

Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.

User interface

Main article: Operating system user interface

[Figure: a screenshot of the Bourne Again Shell command line. Each command is typed out after the prompt, and its output appears below, working its way down the screen; the current command prompt is at the bottom.]

Every computer that is to be operated by an individual requires a user interface. The user interface is usually referred to as a shell and is essential if human interaction is to be supported.
The user interface views the directory structure and requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.

Graphical user interfaces

[Figure: a screenshot of the KDE Plasma Desktop graphical user interface. Programs take the form of images on the screen, and files, folders (directories) and applications take the form of icons and symbols; a mouse is used to navigate the computer.]

Most modern computer systems support graphical user interfaces (GUI), and often include them. In some computer systems, such as the original implementation of Mac OS, the GUI is integrated into the kernel. While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s, UNIX, VMS and many others were built this way; Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space, whereas the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
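The command-line style of interface described above, where a line is read after the prompt, split into words, and handed to the operating system to execute, can be sketched as a toy interpreter. This is an illustrative sketch, not how any real shell is implemented; the function name is invented, and built-ins, pipes, and job control are all omitted.

```python
import shlex
import subprocess

def run_line(line: str) -> int:
    """Interpret one typed command line, shell-style: split, dispatch, run."""
    argv = shlex.split(line)          # "ls -l /tmp" -> ["ls", "-l", "/tmp"]
    if not argv:                      # blank line: nothing to do
        return 0
    if argv[0] == "exit":             # a built-in the shell handles itself
        raise SystemExit
    # Everything else is passed to the OS as a child process; the shell
    # waits for it and reports its exit status.
    completed = subprocess.run(argv)
    return completed.returncode

# The interactive loop would be (left commented so the sketch is non-blocking):
# while True:
#     run_line(input("$ "))
```

The split between built-ins handled by the shell and external programs run as child processes mirrors how real command interpreters are structured.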
Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma Desktop is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.

Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort to standardize in the 1990s on COSE and CDE failed for various reasons, and they were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).

Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[23]

Real-time operating systems

Main article: Real-time operating system

A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems. An early example of a large-scale real-time operating system was the Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System. Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux.
Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase.[citation needed] Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b. Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.

Operating system development as a hobby

See also: Hobbyist operating system development

Operating system development is one of the most complicated activities in which a computing hobbyist may engage. A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system, and which has few users and active developers.[24] In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Or, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his/her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests. Examples of hobby operating systems include ReactOS and Syllable.

Diversity of operating systems and portability

Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, meaning of arguments, etc.), requiring the application to be adapted, changed, or otherwise maintained. This cost in supporting operating system diversity can be avoided by instead writing applications against software platforms like Java or Qt.
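A small, loose illustration of what such a platform absorbs on an application's behalf: Python's standard `pathlib` module expresses a path operation once and renders it in each operating system's convention, so application code never hard-codes the separator. This is an analogy from one standard library, not the mechanism Java or Qt themselves use.

```python
from pathlib import PurePosixPath, PureWindowsPath

# The same logical path, built from the same components, is rendered
# in each OS family's native convention by the abstraction layer.
components = ("logs", "app.log")

print(PurePosixPath(*components))    # logs/app.log
print(PureWindowsPath(*components))  # logs\app.log
```

An application written against the abstraction runs unchanged on either family; one written against a literal "/" or "\" must be adapted when ported, which is exactly the maintenance cost described above.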
These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries. Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.

See also

Comparison of operating systems
Handheld computers
Hypervisor
Interruptible operating system
List of important publications in operating systems
List of operating systems
Microcontroller
Network operating system
Object-oriented operating system
Operating System Projects
PCjacking
System image
Timeline of operating systems
Usage share of operating systems

References

^ Stallings (2005). Operating Systems, Internals and Design Principles. Pearson: Prentice Hall. p. 6.
^ Dhotre, I.A. (2009). Operating Systems. Technical Publications. p. 1.
^ "Operating System Market Share". Net Applications. http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=10.
^ a b Hansen, Per Brinch, ed. (2001). Classic Operating Systems. Springer. pp. 4–7. ISBN 0-387-95113-X. http://books.google.com/?id=-PDPBvIPYBkC&lpg=PP1&pg=PP1#v=onepage&q.
^ "OS X Mountain Lion - Move your Mac even further ahead". Apple. http://www.apple.com/macosx/lion/. Retrieved 2012-08-07.
^ Usage share of operating systems
^ a b "Top 5 Operating Systems from January to April 2011". StatCounter. October 2009. http://gs.statcounter.com/#os-ww-monthly-201101-201104-bar. Retrieved November 5, 2009.
^ "IDC report into Server market share". Idc.com. http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22360110&sectionId=null&elementId=null&pageType=SYNOPSIS. Retrieved 2012-08-07.
^ Linux still top embedded OS
^ Tom Jermoluk (2012-08-03). "TOP500 List – November 2010 (1–100) | TOP500 Supercomputing Sites". Top500.org. http://www.top500.org/list/2010/11/100. Retrieved 2012-08-07.
^ "Global Web Stats".
Net Market Share, Net Applications. May 2011. http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=8. Retrieved 2011-05-07.
^ "Global Web Stats". W3Counter, Awio Web Services. September 2009. http://www.w3counter.com/globalstats.php. Retrieved 2009-10-24.
^ "Operating System Market Share". Net Applications. October 2009. http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=8. Retrieved November 5, 2009.
^ "w3schools.com OS Platform Statistics". http://www.w3schools.com/browsers/browsers_os.asp. Retrieved October 30, 2011.
^ "Stats Count Global Stats Top Five Operating Systems". http://gs.statcounter.com/#os-ww-monthly-201010-201110. Retrieved October 30, 2011.
^ "Global statistics at w3counter.com". http://www.w3counter.com/globalstats.php. Retrieved 23 January 2012.
^ "Troubleshooting MS-DOS Compatibility Mode on Hard Disks". Support.microsoft.com. http://support.microsoft.com/kb/130179/EN-US. Retrieved 2012-08-07.
^ "Using NDIS 2 PCMCIA Network Card Drivers in Windows 95". Support.microsoft.com. http://support.microsoft.com/kb/134748/en. Retrieved 2012-08-07.
^ "INFO: Windows 95 Multimedia Wave Device Drivers Must be 16 bit". Support.microsoft.com. http://support.microsoft.com/kb/163354/en. Retrieved 2012-08-07.
^ "Operating System Share by Groups for Sites in All Locations January 2009". http://news.netcraft.com/SSL-Survey/CMatch/osdv_all.
^ "Behind the IDC data: Windows still No. 1 in server operating systems". ZDNet. 2010-02-26. http://blogs.zdnet.com/microsoft/?p=5408.
^ Stallings, William (2008). Computer Organization & Architecture. New Delhi: Prentice-Hall of India Private Limited. p. 267. ISBN 978-81-203-2962-1.
^ Poisson, Ken. "Chronology of Personal Computer Software". Retrieved 2008-05-07. Last checked 2009-03-30.
^ "My OS is less hobby than yours". Osnews. December 21, 2009. http://www.osnews.com/story/22638/My_OS_Is_Less_Hobby_than_Yours. Retrieved December 21, 2009.
Further reading

Auslander, Marc A.; Larkin, David C.; Scherr, Allan L. (1981). The Evolution of the MVS Operating System. IBM Journal of Research & Development. http://www.research.ibm.com/journal/rd/255/auslander.pdf.
Deitel, Harvey M.; Deitel, Paul; Choffnes, David. Operating Systems. Pearson/Prentice Hall. ISBN 978-0-13-092641-8.
Bic, Lubomur F.; Shaw, Alan C. (2003). Operating Systems. Pearson: Prentice Hall.
Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts. John Wiley & Sons. ISBN 0-470-12872-0.

Information technology
Information technology (IT) can be defined in various ways, but is broadly considered to encompass the use of computers and telecommunications equipment to store, retrieve, transmit and manipulate data.[1] The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones.[2]

Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC,[3] but the term "information technology" in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Leavitt and Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)."[4] Based on the storage and processing technology employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic.[3] This article focuses on the last of those periods, which began in about 1940.

Definitions

In a business context, the Information Technology Association of America has defined information technology (IT) as "the study, design, development, application, implementation, support or management of computer-based information systems".[5] In an academic context, the Association for Computing Machinery defines it as "undergraduate degree programs that prepare students to meet the computer technology needs of business, government, healthcare, schools, and other kinds of organizations ....
IT specialists assume responsibility for selecting hardware and software products appropriate for an organization, integrating those products with organizational needs and infrastructure, and installing, customizing, and maintaining those applications for the organization's computer users. Examples of these responsibilities include the installation of networks; network administration and security; the design of web pages; the development of multimedia resources; the installation of communication components; the oversight of email systems; and the planning and management of the technology lifecycle by which an organization's technology is maintained, upgraded, and replaced."[6]

History of computers

Main article: History of computing hardware

Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick.[7] The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered to be the earliest known mechanical analog computer; it is also the earliest known geared mechanism.[8] Comparable geared devices did not emerge in Europe until the 16th century,[9] and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.[10]

Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first electronic digital computer, but although programmable it was not general-purpose, being designed for a single task.
Neither did it store its programs in memory; programming was carried out using plugs and switches to alter the internal wiring.[11] The first recognisably modern electronic digital stored-program computer was the Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on 21 June 1948.[12]

Data storage

Main article: Data storage device

Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete.[13] Electronic data storage as used in modern computers dates from the Second World War, when a form of delay line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line.[14] The first random-access digital storage device was the Williams tube, based on a standard cathode ray tube,[15] but the information stored in it and in delay line memory was volatile, in that it had to be continuously refreshed and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932[16] and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.[17]

Most digital data today is still stored magnetically on devices such as hard disk drives, or optically on media such as CD-ROMs.[18] It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007,[19] doubling roughly every 3 years.[20]

Databases

Main article: Database management system

Database management systems emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly.
One of the earliest such systems was IBM's Information Management System (IMS),[21] which is still widely deployed more than 40 years later.[22] IMS stores data hierarchically,[21] but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows and columns. The first commercially available relational database management system (RDBMS) was available from Oracle in 1980.[23]

All database management systems consist of a number of components that together allow the data they store to be accessed simultaneously by many users while maintaining its integrity. A characteristic of all databases is that the structure of the data they contain is defined and stored separately from the data itself, in a database schema.[21]

The extensible markup language (XML) has become a popular format for data representation in recent years. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort".[24] As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable.[25]

Data retrieval

The relational database model introduced a programming-language-independent Structured Query Language (SQL), based on relational algebra.[23]

The terms "data" and "information" are not synonymous. Anything stored is data, but it only becomes information when it is organised and presented meaningfully.[26] Most of the world's digital data is unstructured, and stored in a variety of different physical formats[27][a] even within a single organisation. Data warehouses began to be developed in the 1980s to integrate these disparate stores.
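The relational model's tables, rows and columns, queried with SQL, can be shown concretely with Python's built-in `sqlite3` module. The schema and data here are invented for illustration; the query combines the relational operations of selection (the WHERE clause) and projection (the column list).

```python
import sqlite3

# An in-memory relational database: one table with typed columns.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
)
con.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "Ada", "IT"), (2, "Grace", "IT"), (3, "Edgar", "HR")],
)

# Selection and projection over the rows, expressed declaratively in SQL.
rows = con.execute(
    "SELECT name FROM employee WHERE dept = 'IT' ORDER BY id"
).fetchall()
print(rows)  # [('Ada',), ('Grace',)]
```

Note that the schema (the CREATE TABLE statement) is declared separately from the data itself, as the paragraph above describes, and that the query states what result is wanted rather than how to traverse the storage.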
They typically contain data extracted from various sources, including external sources such as the Internet, organised in such a way as to facilitate decision support systems (DSS).[28]

Data transmission

Data transmission has three aspects: transmission, propagation, and reception.[29] XML has been increasingly employed as a means of data interchange since the early 2000s,[30] particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP,[25] describing "data-in-transit rather than ... data-at-rest".[30] One of the challenges of such usage is converting data from relational databases into XML Document Object Model (DOM) structures.[31]

Data manipulation

Hilbert and Lopez[19] identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.[19]

Massive amounts of data are stored worldwide every day, but unless it can be analysed and presented effectively, it essentially resides in what have been called data tombs: "data archives that are seldom visited".[32] To address that issue, the field of data mining – "the process of discovering interesting patterns and knowledge from large amounts of data"[33] – emerged in the late 1980s.[34]

Commercial perspective

Worldwide IT spending forecast (billions of U.S. dollars)[35]

Category             2011 spending   2012 spending
Computing hardware   404             423
Enterprise software  269             290
IT services          845             864
Telecom equipment    340             377
Telecom services     1,663           1,686
Total                3,523           3,640

Social and ethical perspectives

Main article: Information ethics

The field of information ethics was established by mathematician Norbert Wiener in the 1940s.[36] Some of the ethical issues associated with the use of information technology include:[37]

Breaches of copyright by those downloading files stored without the permission of the copyright holders
Employers monitoring their employees' emails and other Internet usage
Unsolicited emails
Hackers accessing online databases
Web sites installing cookies or spyware to monitor a user's online activities

See also

Information systems (IS)

References

Notes

^ "Format" refers to the physical characteristics of the stored data such as its encoding scheme; "structure" describes the organisation of that data.

Citations

^ Daintith, John, ed. (2009), "IT", A Dictionary of Physics, Oxford University Press, http://www.oxfordreference.com/views/ENTRY.html?subview=Main&entry=t83.e1592, retrieved 1 August 2012 (subscription required)
^ Chandler, Daniel; Munday, Rod, "Information technology", A Dictionary of Media and Communication (first ed.), Oxford University Press, http://www.oxfordreference.com/views/ENTRY.html?subview=Main&entry=t326.e1343, retrieved 1 August 2012 (subscription required)
^ a b Butler, Jeremy G., "A History of Information Technology and Systems", University of Arizona, http://www.tcf.ua.edu/AZ/ITHistoryOutline.htm, retrieved 2 August 2012
^ Leavitt, Harold J.; Whisler, Thomas L. (1958), "Management in the 1980s", Harvard Business Review 11, http://hbr.org/1958/11/management-in-the-1980s
^ Proctor 2011, preface.
^ The Joint Task Force for Computing Curricula 2005. Computing Curricula 2005: The Overview Report (pdf)
^ Schmandt-Besserat, D.
(1981), "Decipherment of the earliest tablets", Science 211 (4479): 283–285, doi:10.1126/science.211.4479.283, PMID 17748027 (subscription required)
^ Wright 2012, p. 279
^ Childress 2000, p. 94
^ Chaudhuri 2004, p. 3
^ Lavington 1980
^ Enticknap, Nicholas (Summer 1998), "Computing's Golden Jubilee", Resurrection (The Computer Conservation Society) (20), ISSN 0958-7403, http://www.cs.man.ac.uk/CCS/res/res20.htm#d, retrieved 19 April 2008
^ Alavudeen & Venkateshwaran 2010, p. 178
^ Lavington 1998, p. 1
^ "Early computers at Manchester University", Resurrection (The Computer Conservation Society) 1 (4), Summer 1992, ISSN 0958-7403, http://www.cs.man.ac.uk/CCS/res/res04.htm#g, retrieved 19 April 2008
^ Universität Klagenfurt, ed., "Magnetic drum", Virtual Exhibitions in Informatics, http://cs-exhibitions.uni-klu.ac.at/index.php?id=222, retrieved 21 August 2011
^ The Manchester Mark 1, University of Manchester, http://www.digital60.org/birth/manchestercomputers/mark1/manchester.html, retrieved 24 January 2009
^ Wang & Taratorin 1999, pp. 4–5
^ a b c Hilbert, Martin; López, Priscilla, "The World's Technological Capacity to Store, Communicate, and Compute Information", Science 332 (6025): 60–65, http://www.sciencemag.org/content/332/6025/60, retrieved 1 August 2012
^ "Video animation on The World's Technological Capacity to Store, Communicate, and Compute Information from 1986 to 2010"
^ a b c Ward & Dafoulas 2006, p. 2
^ Olofson, Carl W. (October 2009), "A Platform for Enterprise Data Services", IDC, http://public.dhe.ibm.com/software/data/sw-library/ims/idc-power-of-ims.pdf, retrieved 7 August 2012
^ a b Ward & Dafoulas 2006, p. 3
^ Pardede 2009, p. 2
^ a b Pardede 2009, p. 4
^ Kedar 2009, pp. 1–9
^ van der Aalst 2011, p. 2
^ Dyché 2000, pp. 4–6
^ Weik 2000, p. 361
^ a b Pardede 2009, p. xiii
^ Lewis 2003, pp. 228–231
^ Han, Kamber & Pei 2011, p. 5
^ Han, Kamber & Pei 2011, p. 8
^ Han, Kamber & Pei 2011, p. xxiii
^ "Gartner Says Worldwide IT Spending On Pace to Surpass $3.6 Trillion in 2012". http://www.gartner.com/it/page.jsp?id=2074815. Retrieved July 17, 2012.
^ Bynum 2008, p. 9
^ Reynolds 2009, pp. 20–21

Bibliography

Alavudeen, A.; Venkateshwaran, N. (2010), Computer Integrated Manufacturing, PHI Learning, ISBN 978-81-203-3345-1
Bynum, Terrell Ward (2008), "Norbert Wiener and the Rise of Information Ethics", in van den Hoven, Jeroen; Weckert, John, Information Technology and Moral Philosophy, Cambridge University Press, ISBN 978-0-521-85549-5
Chaudhuri, P. Pal (2004), Computer Organization and Design, PHI Learning, ISBN 978-81-203-1254-8
Childress, David Hatcher (2000), Technology of the Gods: The Incredible Sciences of the Ancients, Adventures Unlimited Press, ISBN 978-0-932813-73-2
Dyché, Jill (2000), Turning Data Into Information With Data Warehousing, Addison Wesley, ISBN 978-0-201-65780-7
Han, Jiawei; Kamber, Micheline; Pei, Jian (2011), Data Mining: Concepts and Techniques (3rd ed.), Morgan Kaufmann, ISBN 978-0-12-381479-1
Kedar, Seema (2009), Database Management Systems, Technical Publications, ISBN 978-81-8431-584-4
Lavington, Simon (1980), Early British Computers, Digital Press, ISBN 978-0-7190-0810-8
Lavington, Simon (1998), A History of Manchester Computers (2nd ed.), The British Computer Society, ISBN 978-1-902505-01-5
Lewis, Bryn (2003), "Extraction of XML from Relational Databases", in Chaudhri, Akmal B.; Djeraba, Chabane; Unland, Rainer et al., XML-Based Data Management and Multimedia Engineering – EDBT 2002 Workshops, Springer, ISBN 978-3540001300
Pardede, Eric (2009), Open and Novel Issues in XML Database Applications, Information Science Reference, ISBN 978-1-60566-308-1
Proctor, K. Scott (2011), Optimizing and Assessing Information Technology: Improving Business Project Execution, John Wiley & Sons, ISBN 978-1-118-10263-3
Reynolds, George (2009), Ethics in Information Technology, Cengage Learning, ISBN 978-0-538-74622-9
van der Aalst, Wil M. P.
(2011), Process Mining: Discovery, Conformance and Enhancement of Business Processes, Springer, ISBN 978-3-642-19344-6
Wang, Shan X.; Taratorin, Aleksandr Markovich (1999), Magnetic Information Storage Technology, Academic Press, ISBN 978-0-12-734570-3
Ward, Patricia; Dafoulas, George S. (2006), Database Management Systems, Cengage Learning EMEA, ISBN 978-1-84480-452-8
Weik, Martin (2000), Computer Science and Communications Dictionary, 2, Springer, ISBN 978-0-7923-8425-0
Wright, Michael T. (2012), "The Front Dial of the Antikythera Mechanism", in Koetsier, Teun; Ceccarelli, Marco, Explorations in the History of Machines and Mechanisms: Proceedings of HMM2012, Springer, pp. 279–292, ISBN 978-94-007-4131-7

Further reading

Allen, T., and M. S. Morton, eds. (1994). Information Technology and the Corporation of the 1990s. New York: Oxford University Press.
Shelly, Gary; Cashman, Thomas; Vermaat, Misty; Walker, Tim (1999). Discovering Computers 2000: Concepts for a Connected World. Cambridge, Massachusetts: Course Technology.
Webster, Frank; Robins, Kevin (1986). Information Technology—A Luddite Analysis. Norwood, NJ: Ablex.

External links

The Global Information Technology Report 2008–2009
Edit this pageMaybe later Categories: Applied sciences Information technology Media technology Outsourcing Hidden categories: Pages containing links to subscription only content Personal tools Create account Log in Namespaces Article Talk Variants Views Read Edit View history Actions Search Navigation Main page Contents Featured content Current events Random article Donate to Wikipedia Interaction Help About Wikipedia Community portal Recent changes Contact Wikipedia Toolbox What links here Related changes Upload file Special pages Permanent link Page information Cite this page Rate this page Print/export Create a book Download as PDF Printable version Languages Afrikaans Aragonés العربية বাংলা Беларуская Беларуская (тарашкевіца)‎ Български Bosanski Česky Cymraeg Dansk Deutsch Eesti Esperanto فارسی Gaelg Galego 贛語 한국어 Հայերեն हिन्दी Hrvatski Bahasa Indonesia Íslenska עברית Basa Jawa ಕನ್ನಡ ქართული Қазақша Kiswahili Кыргызча ລາວ Latina Latviešu Lietuvių Magyar മലയാളം मराठी مصرى Bahasa Melayu Mirandés မြန်မာဘာသာ Nederlands नेपाली 日本語 Norsk (bokmål)‎ Norsk (nynorsk)‎ Occitan Олык марий Oʻzbekcha پنجابی Polski Português Română Русиньскый Русский Саха тыла Setswana Shqip සිංහල Simple English Slovenčina Slovenščina کوردی Српски / srpski Srpskohrvatski / српскохрватски Suomi Svenska Tagalog தமிழ் Татарча/tatarça ไทย Türkçe Українська اردو Tiếng Việt Winaray ייִדיש 粵語 中文 This page was last modified on 16 November 2012 at 23:15. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details.Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Contact us Privacy policy About Wikipedia Disclaimers Mobile view

NETWORK CABLE AND CONNECTORS

IT DEPARTMENT

White Paper

There are several classifications of cable used for twisted-pair networks.  I'll skip right over them and state that I use and recommend Category 5 (or CAT 5) cable for all new installations.  Likewise, there are several fire-code classifications for the outer insulation of CAT 5 cable.  I use CMR cable, or "riser cable," for most of the wiring I do.  You should also be aware of CMP or plenum cable (a plenum is a space used to distribute air in a building).  Local, state, or national codes may require the more expensive plenum-jacketed cable wherever it runs through suspended ceilings, ducts, or other spaces that circulate air or act as an air passage from one room to another.  If in doubt, use plenum.  CMR cable is generally acceptable for all applications not requiring plenum cable.
CAT 5 wire is available in reel-in-box packaging. This is very handy for pulling the wire without putting twists in it.  Without this kind of package or a cable-reel stand, pulling wire is a two-person job.  Before the advent of the reel-in-box, we used to put a reel of wire on a broom handle to pull it: one person would hold the broom handle and the other would pull and measure the cable.  You will produce a tangled mess if you pull the wire off the end of the reel.

Stranded-wire patch cables are often specified for cable segments running from a wall jack to a PC and for patch panels.  They are more flexible than solid-core wire.  The rationale for using them is that the constant flexing of patch cables may wear out solid-core cable and break it.  I don't think this is a real concern in the average small network.   For example, I have one solid-core cable going to my workbench.  It has probably seen an average person's lifetime of flexes from the many times I have connected customer computers to my network.   Also, stranded cable is susceptible to degradation from moisture infiltration, may use an alternate color code, and should not be used for cables longer than 3 meters (about 10 feet).

Most of the wiring I do simply connects computers directly to other computers or hubs.  Solid core cable is quite suitable for this purpose and for many home and small business networks.   I find it also quite acceptable for use as patch cables.  You might consider stranded wire patch cables if you have a notebook computer you are constantly moving around.

CAT 5 cable has four twisted pairs of wire for a total of eight individually insulated wires.   Each pair is color-coded with one wire having a solid color (blue, orange, green, or brown) twisted around a second wire with a white background and a stripe of the same color.   The solid colors may have a white stripe in some cables.  Cable colors are commonly described using the background color followed by the color of the stripe; e.g., white-orange is a wire with a white background and an orange stripe.

CONNECTORS


The straight-through and crossover patch cables discussed in this article are terminated with CAT 5 RJ-45 modular plugs.  RJ-45 plugs are similar to those you'll see on the end of your telephone cable, except they have eight contacts versus four or six, and they are about twice as big.  Make sure they are rated for CAT 5 wiring.  (RJ means "Registered Jack.")  Some RJ-45 plugs are designed for both solid-core wire and stranded wire; others are designed specifically for one kind of wire or the other.  Be sure you buy plugs appropriate for the wire you are going to use.  I use plugs designed to accommodate both kinds of wire.

 

COLOR-CODE STANDARDS

Again, please bear with me...  Let's start with simple pin-out diagrams of the two types of UTP Ethernet cables and watch how committees can make a can of worms out of them.  Here are the diagrams:

[Diagram: pin-outs of the two UTP Ethernet cable types]

Note that the TX (transmitter) pins are connected to the corresponding RX (receiver) pins, plus to plus and minus to minus.  You must use a crossover cable to connect units with identical interfaces.  If you use a straight-through cable, one of the two units must, in effect, perform the crossover function.

Two wire color-code standards apply: EIA/TIA 568A and EIA/TIA 568B. The codes are commonly depicted with RJ-45 jacks as follows:

[Diagram: 568A and 568B color codes on RJ-45 jacks]

If we apply the 568A color code and show all eight wires, our pin-out looks like this:

[Diagram: full eight-wire 568A pin-out]

Note that pins 4, 5, 7, and 8 and the blue and brown pairs are not used in either standard.  Quite contrary to what you may read elsewhere, these pins and wires are not used or required to implement 100BASE-TX duplexing--they are just plain wasted.
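The two color codes and the unused pins can be written down as a small table.  The sketch below (Python, purely illustrative -- the names are mine, not from any standard library) encodes the 568A pin-out and derives 568B by swapping the green and orange pairs:

```python
# EIA/TIA 568A pin-to-color assignments, background color first
# ("white-green" is a white wire with a green stripe).
T568A = {
    1: "white-green",  2: "green",
    3: "white-orange", 4: "blue",
    5: "white-blue",   6: "orange",
    7: "white-brown",  8: "brown",
}

# 568B is 568A with the green and orange pairs exchanged.
_SWAP = {"green": "orange", "orange": "green"}

def _swap_pair(color):
    background, _, stripe = color.partition("-")
    if stripe:  # striped wire, e.g. "white-green"
        return f"{background}-{_SWAP.get(stripe, stripe)}"
    return _SWAP.get(color, color)

T568B = {pin: _swap_pair(color) for pin, color in T568A.items()}

# 10BASE-T and 100BASE-TX use only pins 1/2 (one pair) and 3/6 (another);
# pins 4, 5, 7, and 8 -- the blue and brown pairs -- are wasted.
UNUSED_PINS = {4, 5, 7, 8}
```

Deriving 568B from 568A this way mirrors the rule used later in this article: exchange the green and orange pairs and nothing else.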

However, the actual cables are not physically that simple.  In the diagrams, the orange pair of wires is not adjacent.  The blue pair is upside-down.  The right ends match RJ-45 jacks and the left ends do not.  If, for example, we invert the left side of the 568A "straight-thru" cable to match a 568A jack--put one 180° twist in the entire cable from end to end--and twist together and rearrange the appropriate pairs, we get the following can of worms:

[Diagram: 568A straight-through cable with pairs twisted and rearranged]

This further emphasizes, I hope, the importance of the word "twist" in making network cables that will work.  You cannot use flat untwisted telephone cable for a network cable.  Furthermore, you must use a pair of twisted wires to connect a set of transmitter pins to their corresponding receiver pins.  You cannot use a wire from one pair and another wire from a different pair.

Keeping the above principles in mind, we can simplify the diagram for a 568A straight-thru cable by untwisting the wires, except for the 180° twist in the entire cable, and bending the ends upward.  Likewise, if we exchange the green and orange pairs in the 568A diagram we get a simplified diagram for a 568B straight-thru cable.  If we cross the green and orange pairs in the 568A diagram we arrive at a simplified diagram for a crossover cable.  All three are shown below.

[Diagrams: simplified 568A straight-through, 568B straight-through, and crossover cables]

NETWORK CABLE TOOLS

Modular Plug Crimp Tool

 

You will need a modular plug crimp tool.  Mine is very similar to the one I have been using for many years for all kinds of telephone cable work, and it works just fine for Ethernet cables.  You don't need a lot of bells and whistles, just a tool that will securely crimp RJ-45 connectors.

Even though the crimper has cutters that can be used to cut the cable and individual wires, and possibly to strip the outer jacket, I find that the following tools are better for stripping and cutting the cable...

Universal UTP Stripping Tool (Eclipse)

 

I recently bought one of these tools.  It works slick and makes a much neater cut.  I recommend that you purchase one if you will be making many cables.

Diagonal Cutters

It is easier to use diagonal cutters ("diags" or "dikes") to cut the cable off at the reel and to fine-tune the cable ends during assembly.  Also, if you don't have a stripper, you can strip the cable with a small knife (X-Acto, utility, etc.): carefully slice the outer jacket longitudinally, then use the diags to cut it off around the circumference.

LET'S MAKE IT SIMPLE

 

There are only two unique cable ends in the preceding diagrams. They correspond to the 568A and 568B RJ-45 jacks:

[Diagram: 568A and 568B cable ends]

Again, the wires with colored backgrounds may have white stripes and may be denoted that way in diagrams found elsewhere.  For example, the green wire may be labeled green-white; I don't bother.  The background color is always specified first.

Now, all you need to remember, to properly configure the cables, are the diagrams for the two cable ends and the following rules:

A straight-thru cable has identical ends.

A crossover cable has different ends.

It makes no functional difference which standard you use for a straight-thru cable.   You can start a crossover cable with either standard as long as the other end is the other standard.  It makes no functional difference which end is which.  Despite what you may have read elsewhere, a 568A patch cable will work in a network with 568B wiring, and a 568B patch cable will work in a 568A network.  The electrons couldn't care less.

My preference is to use the 568A standard for straight-thru cables and to start crossover cables with a 568A end.  That way all I have to remember is the diagram for the 568A end, that a straight-thru cable has two of them, and that the green and orange pairs are swapped at the other end of a crossover cable.
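These rules are mechanical enough to state in a few lines of code.  The sketch below (Python, illustrative only; the pin-outs are written inline, background color first) classifies a cable from the color order at its two ends:

```python
# Pin 1..8 color order for each termination standard.
T568A = ["white-green", "green", "white-orange", "blue",
         "white-blue", "orange", "white-brown", "brown"]
T568B = ["white-orange", "orange", "white-green", "blue",
         "white-blue", "green", "white-brown", "brown"]

def cable_type(end1, end2):
    """Classify a cable from the wire color order at its two plugs."""
    if end1 not in (T568A, T568B) or end2 not in (T568A, T568B):
        return "miswired"        # neither standard: start over
    if end1 == end2:
        return "straight-thru"   # identical ends
    return "crossover"           # one 568A end, one 568B end
```

Note that either standard on both ends yields a working straight-thru cable; the classification depends only on whether the two ends match.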

LET'S MAKE SOME CABLES

1.  Pull the cable off the reel to the desired length and cut.  I have a box of cable at one end of my shop and a mark on the floor 10' away.  For cable lengths that are a fraction of ten feet, I eyeball the length as I pull the cable out of the box (also, my feet are about one foot long).  For longer cables, I pull it out to the ten-foot mark, go back to the box, and pull the remaining fraction or another ten feet.  If you are pulling cables through walls, a hole in the floor, etc., it is easier to attach the RJ-45 plugs after the cable is pulled.  The total length of wire segments between a PC and a hub or between two PCs cannot exceed 100 meters (328 feet, or about the length of a football field) for 100BASE-TX (and 10BASE-T).

2.  Strip one end of the cable with the stripper or a knife and diags.  If you are using the stripper, place the cable in the groove on the blade (left) side of the stripper and align the end of the cable with the right side of the stripper.  This is about right to strip a little over 1/2" of the jacket off the cable.  Turn the stripper about one turn or so.  If you turn it much more, you will probably nick the wires.  The idea is to score the outer jacket, but not go all the way through.  Once scored, you should be able to twist the end of the jacket loose and pull it off with one hand while holding the rest of the cable with the other.  If you are using a knife and diags, carefully slit the cable for about an inch or so and neatly trim around the circumference of the cable with the diags to remove the jacket.

3.  Inspect the wires for nicks.  Cut off the end and start over if you see any.  You may have to adjust the blade with the screw at the front of the stripper; cable diameters and jacket thicknesses vary.

4.  Spread and arrange the pairs roughly in the order of the desired cable end.

5.  Untwist the pairs and arrange the wires in the order of the desired cable end.  Flatten the end between your thumb and forefinger.  Trim the ends of the wires so they are even with one another.  It is very important that the unstripped (untwisted) end be slightly less than 1/2" long.  If it is longer than 1/2", it will be out of spec and susceptible to crosstalk.  If it is shorter than that, it will not be properly clinched when the RJ-45 plug is crimped on.  Flatten again.  There should be little or no space between the wires.

6.  Hold the RJ-45 plug with the clip facing down or away from you.  Push the wires firmly into the plug.  Now, inspect the darn thing... before crimping and wasting the plug!  Looking through the bottom of the plug, the wire on the far-left side will have a white background.  The wires should alternate light and dark from left to right.  The furthest-right wire is brown.  The wires should all end evenly at the front of the plug.  The jacket should end just about where you see it in the diagram--right on the line.  Aren't you glad you didn't crimp the plug?

 

ALL ABOUT CRIMPING

7.  Hold the wire near the RJ-45 plug with the clip down and firmly push it into the left side of the front of the crimper (it will only go in one way).  Holding the wire in place, squeeze the crimper handles quite firmly.  This is what will happen:

Crimp it once.  The crimper pushes two plungers down on the RJ-45 plug.  One forces what amounts to a cleverly designed plastic plug/wedge onto the cable jacket and very firmly clinches it.  The other seats the "pins," each with two teeth at its end, through the insulation and into the conductors of their respective wires.

 

8.  Test the crimp... If done properly, an average person will not be able to pull the plug off the cable with his or her bare hands.  And that, quite simply, besides lower cost, is the primary advantage of twisted-pair cables over the older thinwire coaxial cables.  In fact, I would say the RJ-45 connector and the ease of its installation is the main reason coaxial cable is no longer widely used for small Ethernets.  But don't pull that hard on the plug; it could stretch the cable and change its characteristics.  Look at the side of the plug to see if it looks like the diagram, and give it a fairly firm tug to make sure it is crimped well.

 

9.       Prepare the other end of the cable so it has the desired end and crimp.

 

10.  If both ends of the cable are within reach, hold them next to each other with the RJ-45 clips facing away from you.  Look through the bottom of the plugs.  If the plugs are wired correctly and identically, it is a straight-thru cable.  If they are wired correctly but differently, it is a crossover cable.

11.  If you have an operational network, test the cable.   Copy some large files.

12.  If the cable doesn't work, inspect the ends again and make sure you have the right cable and that it is plugged into the correct units for the type of cable.  Try power-cycling (cold booting) the involved computers.

13.  If you have many straight-thru cables and a crossover cable in your system, you should consider labeling the crossover cable or using a different-colored cable for it so you don't mix them up.  I do not recommend implementing the crossover function, as suggested elsewhere, with two RJ-45 jacks wired back to back and two straight-thru cables.  That method costs noticeably more, introduces more components and connections than necessary, increases assembly complexity and time, and decreases reliability.

CABLING RULES

1. Try to avoid running cables parallel to power cables.

2.  Do not bend cables to less than four times the diameter of the cable.

3.  If you bundle a group of cables together with cable ties (zip ties), do not over-cinch them.  It's okay to snug them together firmly, but don't tighten them so much that you deform the cables.

4.  Keep cables away from devices that can introduce noise into them.  Here's a short list: copy machines, electric heaters, speakers, printers, TV sets, fluorescent lights, welding machines, microwave ovens, telephones, fans, elevators, motors, electric ovens, dryers, washing machines, and shop equipment.

5.  Avoid stretching UTP cables (tension when pulling cables should not exceed 25 LBS).

6.  Do not run UTP cable outside of a building.  It presents a very dangerous lightning hazard!

7.  Do not use a stapler to secure UTP cables.  Use telephone wire/RG-6 coaxial wire hangers, which are available at most hardware stores.
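Rule 2 above is simple arithmetic: the minimum bend radius is four times the cable diameter.  A one-line sketch (the ~5 mm jacket diameter in the comment is a typical figure I've assumed, not taken from this article):

```python
def min_bend_radius(cable_diameter, factor=4):
    """Smallest radius a cable may be bent to (same unit as the input)."""
    return factor * cable_diameter

# For a typical ~5 mm CAT 5 jacket this gives a 20 mm minimum radius.
```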

HOW TO MAKE YOUR OWN CAT 5 TWISTED-PAIR NETWORK CABLES

 

INTRODUCTION

The purpose of this article is to show you how to make the two kinds of cables that can be used to network two or more computers together to form quick and simple home or small-office local area networks (LANs).  These instructions can also be used to make patch cables for networks with more complex infrastructure wiring.

The two most common unshielded twisted-pair (UTP) network standards are 10 MHz 10BASE-T Ethernet and 100 MHz 100BASE-TX Fast Ethernet.  The 100BASE-TX standard is quickly becoming the predominant LAN standard.  If you are starting from scratch to build a small home or office network, this is clearly the standard you should choose.  This article will show you how to make cables that will work with both standards.

LANS SIMPLIFIED.  A LAN can be as simple as two computers, each having a network interface card (NIC) or network adapter and running network software, connected together with a crossover cable.

The next step up is a network consisting of three or more computers and a hub.  Each of the computers is plugged into the hub with a straight-thru cable (the crossover function is performed by the hub).
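Put differently, the choice of cable follows one rule: like interfaces need a crossover, and a hub supplies the crossover internally.  A tiny sketch of that decision (Python; the "pc"/"hub" labels are my own, not from this article):

```python
def cable_for_link(device_a, device_b):
    """Pick the patch cable for a small-LAN link; devices are "pc" or "hub"."""
    if device_a == device_b:
        return "crossover"       # identical interfaces, e.g. PC to PC
    return "straight-thru"       # PC to hub: the hub performs the crossover
```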

 

 

Prepared By: Ihsanullah Kofi, IT Support Eng (1/02/2008)
What is Linux?

GNU/Linux, or Linux for short, is an operating system. You already have an operating system on your computer, and you know that the operating system is the most important program on the computer: it manages the other programs, manages the hardware, and mediates between software and hardware. Your operating system may be:

  • Microsoft Windows (a version of DOS, or Windows)
  • Apple Macintosh
  • Unix
  • another operating system

Linux is a Unix-like operating system whose kernel was written by Linus Torvalds, at the time a student at the University of Helsinki in Finland, in the early nineties; its first version was released in 1991. The software the GNU project had produced was then placed alongside this kernel, forming the GNU/Linux operating system. Since many of the important programs in this system are products of the GNU project, the correct name of the operating system is "GNU/Linux".

Linux has many advantages over other operating systems. Briefly, its strengths can be summarized as follows:

  • Very high security, thanks to a firewall in the operating system kernel
  • Multiple layers of security
  • Only a handful of known computer viruses
  • Very high stability, due to sound kernel design: a fault in one program does not destabilize the whole system
  • High speed, because the source code is available, so programs can be compiled for a specific piece of hardware
  • Very low cost, since the operating system itself and most of its software are freely available

Linux also has a fully multitasking kernel; even years before Windows existed, when DOS was a single-tasking operating system, this system could run several tasks concurrently. Linux is a multi-user system, and it takes that concept to its fullest: several users can use one system at the same time without interfering with each other's work. The current version of the Linux kernel, 2.4, can make use of 8 processors simultaneously, and in the next version this will rise to 16 processors.

The most important advantage of Linux over proprietary operating systems is that it is free software. No company, government, or group owns Linux. Many companies and groups have produced their own software collections, called "distributions". The exact number of Linux distributions is unknown, but it is certain that more than 200 registered distributions exist. Anyone with a little technical knowledge and a little time can build their own GNU/Linux system; most people, however, prefer to use ready-made distributions.

Among the best-known Linux distributions are:

  • Debian
  • RedHat
  • Mandrake
  • SuSE
  • Gentoo

Like Microsoft Windows, and indeed like all other operating systems, Linux is installed on a hard disk. In fact, Linux can be installed on almost any computer architecture: from IA32, which most of us use and which includes Intel and AMD processors, to PowerPC (IBM and Motorola processors), Sparc (Sun processors), Alpha (HP processors), and other architectures that need not be listed here. Linux can also easily sit alongside other operating systems, including Microsoft Windows, on a single computer. In that case you have a so-called multi-boot machine, and when you turn the computer on you choose the operating system you want and boot into it.

The beginning of the story

In 1991, as the Cold War was drawing to a close and peace appeared on the horizon, the computing world saw a very bright future ahead. With the power of the new hardware, the limitations of computers were disappearing. Yet something was still missing, and that something was nothing less than a deep void in the field of operating systems.

DOS held the empire of personal computers: a bare-bones operating system that Bill Gates had bought from a Seattle hacker for $50,000 and, with a shrewd business strategy, pushed into every corner of the world. PC users had no other choice. Apple Macintosh computers were better, but their astronomical prices kept them out of most people's reach.

The other camp of the computing world was the world of Unix. But Unix itself was very expensive, so expensive that personal-computer users did not dare approach it. The Unix source code, which Bell Labs had distributed among universities, was guarded cautiously so it would not be disclosed to the public. None of the software makers offered a solution to this problem.

The solution seemed to arrive in the form of the MINIX operating system. MINIX, written from scratch by the Dutch professor Andrew S. Tanenbaum, was intended to teach the inner workings of a real operating system. It was designed to run on Intel 8086 processors and soon saturated the market.

As an operating system, MINIX was not very good. Its main advantage, however, was the availability of its source code. Anyone who obtained Tanenbaum's operating-systems textbook also gained access to its 12,000 lines of code written in C and assembly. For the first time, an eager programmer or hacker could study the source code of an operating system, something software makers had kept off limits. Tanenbaum, an excellent author, set the best minds in computer science discussing and debating how to build an operating system. Computer students across the world, by reading the book and the source code, came to understand the system running on their machines. One of them was named Linus Torvalds.

A new kid on the horizon

In 1991, Linus Benedict Torvalds was a second-year computer science student at the University of Helsinki in Finland and a self-taught hacker. The 21-year-old Finn loved tinkering with the limitations that strained his system. But the most important thing missing was an operating system that could meet the needs of professionals. MINIX was good, but it was only a student operating system, more a teaching tool than a powerful instrument for serious work.

At this time, programmers around the world had been inspired by the GNU project, started by Richard Stallman. The aim of the project was to create a movement providing free and at the same time high-quality software. Stallman began his path at MIT's famous Artificial Intelligence Laboratory by creating the emacs editor in the mid-to-late seventies. By the early eighties, most of the elite programmers of the MIT AI labs had been absorbed by commercial software companies and had signed non-disclosure agreements with them. But Stallman had a different outlook. He believed that, unlike other products, software should be free of restrictions on copying and modification, so that better and more capable software could be produced day after day.

With his famous manifesto of 1983, he launched the GNU project: a movement to produce and distribute software according to his own philosophy. The name GNU stands for "GNU is Not Unix". But to reach his dream of a free operating system, he first needed to create the tools for the job. So in 1984 he began writing the GNU C compiler, GCC, a stunning tool for independent programmers. With his legendary wizardry he single-handedly created a tool that surpassed everything entire teams of commercial programmers had produced. GCC is one of the most efficient and powerful compilers ever created.

By 1991, the GNU project had created a great many tools, but a free operating system still did not exist. Even MINIX was licensed. Work on the GNU kernel, HURD, was under way, but it did not look like it would be usable within the next few years. That was far too long a wait for Torvalds.

On 25 August 1991, this historic message was posted by Torvalds to the MINIX newsgroup:

From: Linus Benedict Torvalds
To: MINIX newsgroup
Subject: What would you most like to see in MINIX?
Summary: a small poll about my new operating system

Hello to everyone using MINIX,
I am making a free operating system for 386 and 486 machines, just as a hobby, nothing big and professional like GNU. It has been brewing since April and is starting to get ready. I would like to gather users' opinions on the things they like or dislike in MINIX, since my operating system somewhat resembles it (the same file-system layout, among other things)... I have now ported bash 1.08 and GCC 1.40 to it, and they seem to work. Within a few months I will have put together something experimental, and I would like to know what features most users want. I welcome any suggestion, but I won't promise I'll implement them all. Linus

As this message shows, even Torvalds himself did not believe that his creation would grow big enough to bring about such a transformation in the world. Linux version 0.01 was released in mid-September 1991 and placed on the Internet. Great enthusiasm formed around Torvalds's creation. The code was downloaded, tested, polished, and returned to Torvalds.

Linux version 0.02 was ready on October 5th, together with Torvalds's famous announcement:

From: Linus Benedict Torvalds
To: MINIX newsgroup
Subject: Free minix-like kernel source code

Do you pine for the nice days of MINIX 1.1, when men were men and wrote their own device drivers? Do you lack a nice project and are you dying to have an operating system you can shape to your own needs? If so, this message is for you.
As I said a month ago, I am working on a free operating system similar to MINIX for 386 computers. It has now reached the point where it is usable, and I am willing to release the source code for wider distribution. This is version 0.02, but I have successfully run Bash, GCC, GNU-Make, GNU-sed, Compress, and so on under it. The sources for this project can be found at nic.funet.fi (128.214.6.100) in the directory pub/OS/Linux. The directory also contains some README files and a number of binaries that run under Linux. Full source code is provided, since none of the MINIX code has been used. The system can be compiled and used as is. The sources for the binaries can be found in pub/GNU.

Linux version 0.03 was ready after a few weeks, and by December Linux had reached version 0.10. Linux was still little more than a skeleton: it supported only AT hard disks, had no login, and booted straight to the command line. Version 0.11 was much better; it supported multilingual keyboards, and floppy disks and VGA, EGA, Hercules, and other graphics cards were supported as well. The version numbers climbed from 0.12 to 0.95 and 0.96 and kept going. Soon the code spread throughout the world via FTP servers in Finland and elsewhere.

Criticism and development

Torvalds soon faced criticism from Andrew Tanenbaum, the renowned teacher who had written MINIX. Tanenbaum wrote to Torvalds:

"I still maintain the point that designing a monolithic kernel in 1991 was a fundamental error. Be thankful you are not my student; you would not get a high grade for such a design."

Torvalds later admitted this was the worst moment in Linux's development. Tanenbaum was a famous professor, and whatever he said carried real weight. But about Linux he was wrong, and Torvalds was not one to accept defeat so easily.

Tanenbaum had also declared: "Linux is obsolete."

Now it was the new Linux generation's turn to move. Backed by strong support from the Linux community, Torvalds sent Tanenbaum a fitting reply:

"Your job is being a professor and researcher: that's a good excuse for some of the brain-damage of MINIX."

And the work went on. Soon hundreds of people joined the Linux camp, then thousands, then hundreds of thousands. Linux was no longer a hackers' toy. With the backing of software from the GNU project, Linux was ready for a real debut. Linux was placed under the GPL license, under which everyone could obtain the Linux source code free of charge, study it, and modify it. Students and programmers seized upon it, and very soon commercial vendors arrived. Linux itself was, and remains, free. What these vendors did was compile the various components and software and package them in a distributable format like other operating systems, so that ordinary people could use it too. Today distributions such as Red Hat, Debian and SuSE hold the largest share of users worldwide. With new graphical user interfaces such as KDE and GNOME, Linux distributions spread widely among the public.

Interesting things also keep happening with Linux. Beyond the PC, Linux has been ported to most platforms. It was adapted to run 3Com's handheld computer, the PalmPilot. Clustering technology made it possible to turn large numbers of Linux machines into a single processing unit: a parallel computer. In April 1996, researchers at the Los Alamos National Laboratory used 68 Linux-based computers for parallel processing to simulate atomic shock waves. But unlike other supercomputers, theirs was remarkably cheap: the self-built supercomputer, with all its equipment and hardware, cost 152,000 dollars, about one tenth the price of a commercial supercomputer. It reached a speed of 16 billion calculations per second and ranked 315th among the world's supercomputers, and it was certainly one of the most stable: three months after going into operation, it still had not needed a reboot.

The best thing Linux has going for it today is its devoted fans. Whenever a new piece of hardware is released, the Linux kernel is adapted to take advantage of it. For example, when AMD released its 64-bit processor, the kernel was ready to run on it within a few weeks. Linux now runs on every major hardware family available, including PC, Mac, Alpha and all kinds of embedded hardware, which makes it well suited for industrial machinery and for devices that need on-board computing. Linux has entered the new millennium with the same philosophy and purpose with which it was created in 1991.

Torvalds is still a modest man. Unlike Bill Gates, he is not a billionaire. After finishing his studies he moved to the United States to work with Transmeta. Following a top-secret project in which Torvalds was an active member, Transmeta brought the Crusoe processor to market. Torvalds remains the world's most popular and famous programmer. He has since left Transmeta and, with the backing of large companies, now works on Linux full-time.

After a decade: Linux today

Today Linux has more than a decade of development behind it and is considered one of the fastest-growing operating systems. From a handful of users in 1991 and 1992, Linux is now used by millions. IBM, once regarded as the open-source community's greatest enemy, has now invested heavily in developing open-source solutions on Linux. Meanwhile, the number of developers working to extend Linux's capabilities grows by the day.

Today a large number of professional commercial companies and institutions provide support for Linux-based products. Using Linux in office environments is no longer a risk. In terms of reliability, stability, and protection against viruses, little can match Linux. Thanks to the efforts of large companies such as Red Hat, the use of Linux in business environments has expanded greatly, and many companies, small and large, now run Linux-based servers and workstations.

The dawn of Linux on the desktop (Desktop Linux)


What was the biggest criticism of Linux? In the past, its all-text environment put many users off. Although the text environment gives you complete control over the system, it is simply not suitable for ordinary computer users. The graphical environments built on the X Window System also could not match the features that graphical operating systems such as Windows offered their users. But over the past several years this situation has been changing. Professional graphical environments such as KDE and GNOME have now completed the Linux picture. These environments have become very user-friendly and powerful, and it is thanks to them that ordinary users too can use Linux today.

Linux in the third world

The arrival of Linux in third-world countries has brought about a transformation. Before Linux, these countries stood at a much lower level in computing. Hardware costs had fallen sharply, but software costs remained back-breaking for such countries. This led to the spread of unlicensed software copying in many of them, causing billions of dollars in losses every year. One of the main reasons is the low per-capita income in these countries: when annual per-capita income is no more than 200 to 300 dollars, there is simply no way to buy a 100-dollar operating system.

The rise of Linux and other open-source products has changed this situation. Linux can run on the old 486 and Pentium machines that are now history in developed countries but are still in use in developing ones. The use of free open-source software has also spread, curbing these countries' crushing software costs. Today, across Asia, Africa and Latin America, the use of Linux and open-source software has grown enormously, and thanks to Linux's inherent adaptability it has been customized for these countries' national languages. Linux documentation has now been translated into most of the world's living languages.

From the desktop to supercomputers

When Torvalds created Linux, his new creation was merely a fresh toy for hackers. But since the 386 machines on which the first Linux kernel ran, Linux has come a long way. One of its most important uses today is heavy parallel processing in supercomputers. Most supercomputers built in the world today use Linux as their operating system.

The story continues

Linux's journey from a hacker project to a worldwide movement is an astonishing revolution. The GNU project, begun in the early 1980s by Richard Stallman, led the development of open-source software. Professor Andrew Tanenbaum and his operating system MINIX turned the study of operating systems from theory into practice, and finally Torvalds's determination and effort led to the birth of Linux. Today Linux is no longer regarded as a hacker project but as a global movement, supported by millions of open-source programmers and by large companies such as IBM. Linux will remain in the history of computing as one of the most remarkable products of human endeavor.

Tux the penguin: Linux's beloved mascot


The Linux mascot is a penguin. Unlike the logos of other, commercial operating systems, it is not very serious! Tux reflects the carefree spirit of the Linux movement. The mascot has a very interesting history. At first Linux had no mascot at all. While Torvalds was on vacation in Australia, during a visit to a zoo a penguin bit his hand as he tried to play with it, and that became the inspiration for adopting the penguin as the Linux mascot.

www.OsmanArrib.blogfa.com