Saturday 24 September 2011

Kernel: Monolithic vs Micro


The kernel is the core of the operating system (OS). A system is roughly divided into two parts: kernel space (privileged mode) and user space (unprivileged mode); without that separation, protection between processes would be impossible.
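As a small illustration, the only way a user-space program touches the privileged side is through a system call. The C sketch below is minimal and assumes a POSIX system:

    /* Minimal sketch, assuming a POSIX system: user space asks the
       kernel to perform I/O through the write(2) system call. */
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from user space\n";
        /* write() traps into kernel space (privileged mode); the kernel
           validates the pointer and length before touching the data. */
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }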

There are two different concepts of kernel: the monolithic kernel and the micro-kernel. The older approach is the monolithic kernel, of which Unix, MS-DOS and Mac OS are typical representatives. It runs basic system services such as process and memory management, interrupt handling, I/O communication and file systems in kernel space.

Monolithic kernel

The inclusion of all basic services in kernel space has three disadvantages:

- Large kernel size
- Lack of extensibility
- Poor maintainability

Bug fixing or the addition of new features means recompiling the whole kernel, which consumes a lot of time as well as a lot of memory.

To overcome these disadvantages, micro-kernels were introduced in the late 1980s. The concept was to reduce the kernel to basic process management, memory primitives and inter-process communication, while the remaining system services reside in user space (they are called servers): there is a server for managing file systems, another for managing drivers, and so on. At first, communication happened through context switching, where user processes were allowed into kernel space and then exited. Later a messaging system was introduced, giving processes an independent communication path instead of context switching, which consumes more memory.

Micro-kernel


The first generation of micro-kernels had a lot of drawbacks concerning IPC, device drivers and communication stacks, which resulted in a larger kernel size and slower execution.

Later the idea was to create a pure micro-kernel small enough to fit into the processor's first-level cache. The second-generation micro-kernel (L4) was highly optimized, which results in very good I/O performance.


Important Parameters in Comparison:

Memory management:

Monolithic kernel:
Monolithic kernels implement everything needed for memory management in kernel space. This includes allocation strategies, virtual memory management, and page replacement algorithms.

Monolithic - Memory Management
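As a rough user-level illustration (a minimal sketch in C, assuming Linux), a process obtains memory by trapping into the kernel itself, for example via mmap(2); allocation policy, page tables and page replacement all live in kernel space:

    /* Minimal sketch, assuming Linux: the process asks the kernel
       directly for an anonymous page of memory via mmap(2). */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4096;                 /* one page */
        char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        strcpy(page, "page allocated and protected by the kernel");
        puts(page);
        munmap(page, len);                 /* hand it back to the kernel */
        return 0;
    }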

Micro-kernel:
The micro-kernel L4 has three memory-management primitives: map, grant and flush. A process maps memory pages to another process if it wants to share those pages. When a process grants pages to another process, it cannot access them anymore. Flushing regains granted and mapped memory pages.

Micro-kernel - Memory Management


The micro-kernel reserves the whole system memory at startup to one process, the base system process, which resides (like all other processes) in user space. If a process needs memory, it doesn't have to go through the kernel anymore, but asks the base system process directly. Because every process can only grant/map/flush the memory pages it owned before, memory protection still exists.
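The ownership rules can be sketched in a few lines of C. This is not the real L4 interface; page_t, map_page(), grant_page() and flush_page() are hypothetical names used only to model the semantics described above:

    /* Hedged sketch (not the real L4 API): modelling who may do what
       with a page. All names here are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int owner;       /* task that currently owns the page         */
        int mapped_to;   /* task holding a shared mapping, -1 if none */
    } page_t;

    /* map: share the page; the owner keeps its own access. */
    bool map_page(page_t *p, int from, int to)
    {
        if (p->owner != from) return false;  /* only the owner may share */
        p->mapped_to = to;
        return true;
    }

    /* grant: transfer the page; the granter loses access entirely. */
    bool grant_page(page_t *p, int from, int to)
    {
        if (p->owner != from) return false;
        p->owner = to;
        p->mapped_to = -1;
        return true;
    }

    /* flush: the owner revokes the mappings it handed out earlier. */
    void flush_page(page_t *p, int from)
    {
        if (p->owner == from)
            p->mapped_to = -1;
    }

    int main(void)
    {
        page_t p = { .owner = 1, .mapped_to = -1 };
        printf("map by owner:   %s\n", map_page(&p, 1, 2) ? "ok" : "denied");
        printf("grant by other: %s\n", grant_page(&p, 2, 3) ? "ok" : "denied");
        flush_page(&p, 1);               /* owner revokes the mapping */
        printf("grant by owner: %s\n", grant_page(&p, 1, 2) ? "ok" : "denied");
        return 0;
    }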

I/O Communication:

Monolithic kernel:
I/O communication works through interrupts, issued by or sent to the hardware. Monolithic kernels run device drivers inside kernel space, and hardware interrupts are handled directly by kernel processes. To add or change a feature provided by the hardware, in the worst case all layers above the changed layer in the monolithic kernel also have to be changed.

The concept of so-called modules was introduced to achieve more independence and separation from the kernel. One module represents (parts of) a driver and is (un)loadable at run time. That way, drivers that are not needed by the system are not loaded, and memory is preserved. But kernel modules are still binary kernel-dependent: if concepts change too much inside the monolithic kernel, modules need not just a recompilation but a complete code adaptation.
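For illustration, a minimal loadable Linux kernel module looks like the sketch below (assuming the kernel build headers are installed); it can be inserted and removed at run time with insmod and rmmod:

    /* Minimal "hello world" Linux kernel module sketch. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");    /* runs at insmod time */
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");  /* runs at rmmod time */
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");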

Micro-kernel:
The micro-kernel approach doesn't handle I/O communication directly; it only ensures that the communication happens. Requests from or to the hardware are redirected as messages by the micro-kernel to the servers in user space. If the hardware triggers an interrupt, the micro-kernel sends a message to the device-driver server and has nothing more to do with it. The device-driver server takes the message and sends it to the right device driver. That way it is possible to add new drivers, exchange the driver manager without exchanging drivers, or even exchange the whole driver-management system without changing any other part of the system.
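A user-space driver server is then just an event loop over the kernel's IPC primitive. The sketch below simulates this in plain C; msg_t, ipc_receive() and handle_irq() are hypothetical stand-ins, since a real micro-kernel such as L4 has its own IPC interface:

    /* Hedged sketch of a user-space device-driver server; the
       "micro-kernel" is faked by a function handing out one message. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { MSG_IRQ, MSG_READ, MSG_WRITE, MSG_NONE } msg_kind;

    typedef struct {
        msg_kind kind;
        int      payload;   /* e.g. the interrupt number */
    } msg_t;

    /* Stand-in for the micro-kernel's blocking IPC receive. */
    static msg_t ipc_receive(void)
    {
        static bool delivered = false;
        if (!delivered) {
            delivered = true;
            return (msg_t){ MSG_IRQ, 14 };   /* pretend IRQ 14 fired */
        }
        return (msg_t){ MSG_NONE, 0 };
    }

    static void handle_irq(int irq)
    {
        printf("driver server: handling IRQ %d\n", irq);
    }

    int main(void)
    {
        /* The driver server's event loop: everything arrives as a message. */
        for (;;) {
            msg_t m = ipc_receive();
            if (m.kind == MSG_NONE)
                break;                       /* simulation done */
            if (m.kind == MSG_IRQ)
                handle_irq(m.payload);
            /* MSG_READ / MSG_WRITE would be client I/O requests. */
        }
        return 0;
    }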

Tuesday 20 September 2011

Linux: A bit of history

MINIX (1987) was created from scratch by Andrew Tanenbaum for educational purposes, in order to teach how to design and implement an OS.

Andrew Tanenbaum

MINIX was designed for educational purposes rather than professional activity, and it mainly ran on the highly successful 8086 platform. The advantage of this kernel was that its source code was available to anyone through Tanenbaum's teaching books on operating systems.

In 1990 the FSF (Free Software Foundation) and the GNU project motivated many programmers to promote quality, freely distributable software. Aside from utility software, work was being done on the project's own kernel, known as HURD.

In 1991 the Finnish student Linus Torvalds presented version 0.01 of his kernel, which he called "Linux". It was designed for i386 architectures and offered under the GPL license to the community of programmers and the Internet community for testing; they liked it and started helping with its development.

Linus Torvalds

What distinguishes Linux from other OSs (UNIX):


1. Open source: anyone can access the source code, change it and create new versions that can be shared under the GPL license.


2. Portability: independent of any particular architecture, requiring only a C compiler such as GNU gcc. GNU/Linux runs on almost all architectures: Intel x86, IA64, AMD x86_64, Sun's SPARC, PowerPC, IBM S390, ARM, and so on.


3. Monolithic-type kernel: the kernel is designed as a single piece but is conceptually modular in its different tasks. The problem with monoliths is that as they grow they become very large and unmanageable for development; DLLs were introduced to try to resolve this.


4. DLLs: these make it possible to have parts of the OS, such as file systems and device drivers, as external parts that are loaded (or linked) with the kernel at run time on demand. This simplifies the kernel, since these functionalities become elements that can be programmed separately.


5. Projects that succeeded together with the Linux kernel:
    - the people of the FSF, with the GNU utility software and above all the GCC C compiler, joined by projects like XFree86, GNOME and KDE;
    - Internet development, with projects like the Apache web server, the Mozilla browser, and the MySQL and PostgreSQL databases, ended up giving the Linux kernel sufficient coverage to compete with proprietary systems.

New companies that created GNU/Linux distributions (packaging of the kernel + applications) and supported them, such as Red Hat, Mandrake and SuSE, fuelled the unstoppable growth we are witnessing today.

Highlights of GNU's contribution:

    - C and C++ compilers
    - bash shell
    - Emacs editor
    - PostScript interpreter
    - Standard C library (glibc)
    - Debugger (gdb)
    - Makefile (GNU make)
    - Assembler
    - Linker

In a further article we will see how the scientists above approached these designs and how monolithic and micro-kernels function.

Friday 16 September 2011

UNIX: A bit of history

UNIX started back in 1969 at BTL (Bell Telephone Laboratories), part of AT&T. The engineers there had just withdrawn from a project called MULTICS, which was designed to create an OS that could support thousands of users simultaneously. BTL, General Electric and MIT were involved in that project, and as it faltered, BTL withdrew from it.


Two engineers who had been involved in MULTICS, Ken Thompson and Dennis Ritchie, found a spare computer that had only an assembler and a loading program, and they began experimenting with kernel ideas on it.
Ken Thompson (left) and Dennis Ritchie


In 1969 Thompson had the idea of writing a file system for the newly created kernel, so that files could be stored in an ordered, hierarchical way. As progress was made on the system design, a few more BTL engineers joined the project. The machine became too small to work on, so the group proposed purchasing a PDP machine, under an agreement to create a new text processor for it.


When the new machine arrived, it came with only a CPU and memory, with no disk or OS. Thompson, unable to wait, designed a RAM disk in memory, using half of the memory as disk and the other half for the OS he was designing. Once the disk arrived, they started working on the OS and on the promised text processor (troff), which was later used to create the UNIX "man pages". BTL started using UNIX with the new text processor.


Another important characteristic was that UNIX was independent of the hardware architecture. In 1971, external users wanted documentation of what was being done, which resulted in the UNIX Programmer's Manual, signed by Thompson and Ritchie. UNIX installations continued to grow, to about 50.


At the end of 1973 it was decided to present the results at a conference on operating systems, after which various IT centres and universities asked for copies of UNIX. AT&T gave no support, so users had to unite and share their knowledge by forming a community (USENIX). AT&T then decided to cede UNIX to universities, but without support.


The first university to obtain a license was Berkeley, where Thompson had studied. In 1975 Thompson returned to Berkeley as a teacher, bringing with him the latest kernel. Two newly graduated students, Chuck Haley and Bill Joy, joined him, and they started to work together on the UNIX implementation.





Bill Joy



The major disappointment was the editors. Joy perfected an editor called "ex" and later evolved it into "vi". The two also developed a Pascal language compiler, which was added to UNIX. Demand for UNIX continued to grow, which led Joy to produce "BSD UNIX" (the Berkeley Software Distribution).


In 1978, BSD acquired a license that set a price for distribution, so that new users could end up making changes or incorporating features, and sell copies after a certain period of time.


Joy made further changes to vi so that the editor became independent of the terminal: he created the termcap system as an interface, so that programs could run irrespective of the terminal in use. In 1977 UNIX was still running only on PDP machines; that year adaptations were made for other machines of the time, such as Interdata and IBM. More versions followed, which included awk, lint, make, uucp and the C compiler designed by Kernighan and Ritchie, which had been created to rewrite most of UNIX, initially written in assembler. The Bourne shell, find, cpio and expr were also included.


UNIX versions started to appear from other sources, such as Microsoft's Xenix and Berkeley's BSD. AT&T realised that UNIX was a valuable commercial product, and its license came to prohibit its study in academic institutions in order to protect its commercial secrets; until then, UNIX source code had been used in universities to teach operating systems.


Everyone found their own solution to the problem. Andrew Tanenbaum decided to write a new UNIX-compatible OS without using a single line of AT&T code; he called this OS MINIX.


Bill Joy decided to leave for a new company called Sun Microsystems, where he turned BSD 4.2 into Sun's newly created UNIX (SunOS). Soon every company was using its own UNIX:


- IBM     - AIX
- DEC     - Ultrix
- HP      - HPUX
- SGI     - IRIX
- Apple   - MacOS X


AT&T released its final version, called UNIX System V (SV), while Berkeley continued with BSD 4.x. Current UNIX versions derive from either SV or BSD, and some manufacturers describe their UNIX as BSD- or SV-style. Later, UNIX standards were drawn up, hence we find IEEE POSIX, UNIX 98, FHS, etc.