Thursday, October 5, 2017

What is Dump and Types of Dump

In computing, a dump (core dump, memory dump, or system dump) consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally. Core dumps are often used to assist in diagnosing and debugging errors in computer programs.

A core dump can be created by netdump, diskdump, xendump, or kdump. In this post we will look more closely at the kernel dump mechanism, kdump.

What is Kernel Dump (Kdump)


Kdump is a kernel crash dumping mechanism that allows you to save the contents of the system's memory for later analysis. It relies on the kexec-tools package: when the running kernel crashes, kexec boots into a second (capture) kernel without going through the BIOS.

This second kernel resides in a reserved part of the system memory that is inaccessible to the first kernel. The second kernel then captures the contents of the crashed kernel's memory (a crash dump) and saves it.

a) How to enable Kdump

To enable kdump, add the crashkernel keyword-value to the kernel line in /boot/grub/grub.conf. Enabling the crash kernel reserves memory for it; you can set the reservation to either auto or a specific value. A minimum of 128M is recommended for a machine with 2G of memory or higher.


root (hd0,0)
kernel /vmlinuz-2.6.32-419.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-419.el6.x86_64.img


b) Configure Dump Location

Once the kernel crashes, the core dump can be captured to a local filesystem or a remote filesystem (NFS), based on the settings defined in /etc/kdump.conf.

This file is automatically created when the kexec-tools package is installed.

path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31

In the file:

To write the dump to a raw device, you can uncomment “raw /dev/sda5” and change it to point to the correct dump device.

For NFS, you can uncomment “#net my.server.com:/export/tmp” and point it to the correct NFS server and export.
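For example, a minimal /etc/kdump.conf that sends dumps to an NFS export might look like this (a sketch; my.server.com and the export path are placeholders, and path is interpreted relative to the NFS export):

net my.server.com:/export/tmp
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31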

c) Configure Core Collector

The next step is to configure the core collector in Kdump configuration file. It is important to compress the data captured and filter all the unnecessary information from the captured core file.

To enable the core collector, uncomment the following line that starts with core_collector.

# core_collector makedumpfile -c --message-level 1 -d 31

The makedumpfile command specified in core_collector makes a small DUMPFILE by compressing the data. makedumpfile provides two DUMPFILE formats (the ELF format and the kdump-compressed format). By default, makedumpfile makes a DUMPFILE in the kdump-compressed format.

The kdump-compressed format can be read only with the crash utility, and it can be smaller than the ELF format because of the compression support.

The ELF format is readable with GDB and the crash utility.
-c compresses the dump data page by page
-d sets the dump level: which types of unnecessary pages are excluded from the dump
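The dump level is a bitmask of page types to exclude, so -d 31 filters out all five types:

1   pages filled with zero
2   non-private cache pages
4   private cache pages
8   user process data pages
16  free pages

31 = 1 + 2 + 4 + 8 + 16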

If you uncomment the line #default shell, then a shell is invoked if kdump fails to collect the core. The administrator can then take the core dump manually using makedumpfile commands.

d) Restart kdump Services

Once kdump is configured, restart the kdump service.
If you have any issues starting the service, then the kdump module or the crashkernel parameter has not been set up properly. Verify /proc/cmdline and make sure it includes the crashkernel value.
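On RHEL/CentOS 6, for example:

# service kdump restart    # restart the kdump service
# chkconfig kdump on       # start kdump automatically at boot
# cat /proc/cmdline        # verify that the crashkernel= parameter is present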

How to Check the Core Dump

The crash utility is used to analyze the core file captured by kdump.

It can also be used to analyze core files created by other dump utilities like netdump, diskdump, and xendump.

You need to ensure the “kernel-debuginfo” package is present and that it is at the same level as the kernel.

Launch the crash tool as shown below. Once you enter this command you will get a crash prompt, where you can execute crash commands:

# crash /var/crash/127.0.0.1-2014-03-26-12\:24\:39/vmcore /usr/lib/debug/lib/modules/`uname -r`/vmlinux

crash>

To view the Processes Running when the System Crashed

Execute the ps command at the crash prompt, which will display all the processes that were running when the system crashed.

crash> ps
   PID    PPID  CPU       TASK        ST  %MEM     VSZ    RSS  COMM
      1      0   0  ffff88013e7db500  IN   0.0   19356   1544  init
      2      0   0  ffff88013e7daaa0  IN   0.0       0      0  [kthreadd]
      3      2   0  ffff88013e7da040  IN   0.0       0      0  [migration/0]
      4      2   0  ffff88013e7e9540  IN   0.0       0      0  [ksoftirqd/0]
      7      2   0  ffff88013dc19500  IN   0.0       0      0  [events/0]

The crash utility also allows you to interactively analyze a running Linux system.
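Besides ps, a few other commonly used commands at the crash prompt:

crash> bt      # backtrace of the task that was active when the kernel crashed
crash> log     # kernel message buffer (dmesg) leading up to the crash
crash> sys     # general system information
crash> vm      # virtual memory of the current context
crash> exit    # leave the crash session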

How to manually Trigger the Core Dump

You can manually trigger the core dump using the following commands:

echo 1 > /proc/sys/kernel/sysrq    # enable the magic SysRq key
echo c > /proc/sysrq-trigger       # force an immediate kernel crash

The server will reboot itself and the crash dump will be generated.

What is Primary, Extended and Logical Partitions

What is the Need for Extended and Logical Partitions

The original partitioning scheme for PC hard disks allowed only four partitions. This quickly turned out to be too little in real life, partly because some people want more than four operating systems (Linux, MS-DOS, OS/2, Minix, FreeBSD, NetBSD, or Windows/NT, to name a few), but primarily because sometimes it is a good idea to have several partitions for one operating system. For example, swap space is usually best put in its own partition for Linux instead of in the main Linux partition for reasons of speed.

To overcome this design problem, extended partitions were invented. This trick allows partitioning a primary partition into sub-partitions. The primary partition thus subdivided is the extended partition; the sub-partitions are logical partitions. They behave like primary partitions, but are created differently. There is no speed difference between them. By using an extended partition you can now have up to 15 partitions per disk.

An extended partition is a way to get around the fact that you can have only four primary partitions on a drive. You can put many logical partitions inside it.

For example, suppose hda4 is an extended partition containing two logical partitions. You will never see hda4 mounted, just hda5 and hda6, in this case.
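A hypothetical fdisk -l listing for such a disk (the sizes are made up for illustration):

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1         500     4016218   83  Linux
/dev/hda2             501        1000     4016250   83  Linux
/dev/hda3            1001        1200     1606500   83  Linux
/dev/hda4            1201        1500     2409750    5  Extended
/dev/hda5            1201        1400     1606437   83  Linux
/dev/hda6            1401        1500      803218   82  Linux swap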

Note: 

Linux numbers primary partitions 1-4; logical partitions start at 5 and up, even if there are fewer than 4 primary partitions.

On an IDE drive you can have up to 63 partitions: 3 primary and 60 logical (contained in one extended partition).

What is a Partition Table

A partition table is a 64-byte data structure that provides basic information for a computer's operating system about the division of the hard disk drive (HDD).

The partition table is part of the master boot record (MBR), which is a small program that is executed when a computer boots (i.e., starts up) in order to find the operating system and load it into memory.

A partition is a division of a HDD into logically independent sections. Primary partitions are the first four partitions on a HDD.

The partition table begins at the hexadecimal (i.e., base 16) position 0x1BE in the boot sector. It contains four entries, each of which is 16 bytes in length, one for each partition.



The partition table entry for each partition consists of six items:

the active flag, with 0x00 for off and 0x80 for on (one byte)
the starting head, cylinder and sector (three bytes)
the filesystem descriptor (one byte)
the ending head, cylinder and sector (three bytes)
the starting sector relative to the beginning of the disk (four bytes)
the number of sectors in the partition (four bytes)
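You can inspect these 64 bytes directly on an MBR disk by dumping them from the first sector, starting at decimal offset 446 (0x1BE). A sketch (replace /dev/sda with your disk):

# dd if=/dev/sda bs=1 skip=446 count=64 2>/dev/null | xxd

Each 16-byte line of the output is one partition table entry.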

Where the MBR and Partition Table are Stored

The MBR, and thus the partition table, is stored in the boot sector, which is the first physical sector of a HDD. A sector is a segment of a track on a magnetic disk (i.e., a floppy disk or a platter in a HDD).

A track is any of the concentric circles on the magnetic media on a disk or platter over which one magnetic head (i.e., a device used for reading and writing data on the disk) passes while the head is stationary but the disk is spinning. A platter is a thin, high-precision aluminum or glass disk that is coated on both sides with a high-sensitivity magnetic material and which is used by a HDD to store data.

The MBR reads the partition table to determine which partition is the active partition. The active partition is the partition that contains the operating system that a computer attempts to load into memory by default when it is booted or rebooted.

Thursday, September 14, 2017

What is Kernel

What is the Kernel?

The kernel is a program that is the core of a computer's operating system, with complete control over everything in the system. It is the first program loaded on start-up, and it handles memory and peripherals like keyboards, monitors, printers, and speakers.

A kernel connects the application software to the hardware of a computer. It is responsible for interfacing all of your applications that are running in “user mode” down to the physical hardware, and allowing processes to get information from each other using inter-process communication (IPC).

The critical code of the kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by applications or other, more minor parts of the operating system. The kernel performs its tasks, such as running processes and handling interrupts, in kernel space. In contrast, everything a user does is in user space: writing text in a text editor, running programs in a GUI, etc. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness.

Different Types of Kernels

a) Monolithic
b) Microkernel
c) Hybrid

a) Monolithic Kernel



In a monolithic kernel, the entire operating system works in kernel space. A monolithic kernel is a single large process running entirely in a single address space. It is a single static binary file. All kernel services exist and execute in the kernel address space. Device drivers can be added to the kernel as modules.

In a monolithic kernel, all the parts of the kernel, like the scheduler, file system, memory management, networking stack, and device drivers, are maintained in one unit within the kernel.

Examples: Linux is a monolithic kernel, while OS X (XNU) and Windows 7 use hybrid kernels.

Pros

More direct access to hardware for programs
Easier for processes to communicate with each other
If your device is supported, it should work with no additional installation
Processes react faster because there is no queue for processor time

Cons

Large install footprint
Large memory footprint
Less secure because everything runs in supervisor mode

Info: What is a system call? A system call is a mechanism that is used by an application program to request a service from the operating system.


The kernel's interface is a low-level abstraction layer. When a process makes a request of the kernel, it makes a system call. Kernel designs differ in how they manage these system calls and resources. A monolithic kernel runs all the operating system instructions in the same address space, for speed. A microkernel runs most processes in user space, for modularity.
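You can watch the system calls a program makes with strace. A sketch, tracing only write calls (exact output varies by system):

$ strace -e trace=write /bin/echo hello
write(1, "hello\n", 6)                  = 6
+++ exited with 0 +++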

b) Microkernel 

Also known as a μ-kernel, it is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).

If the hardware provides multiple rings or CPU modes, the microkernel may be the only software executing at the most privileged level, which is generally referred to as supervisor or kernel mode. Traditional operating system functions, such as device drivers, protocol stacks and file systems, are typically removed from the microkernel itself and are instead run in user space.

In microkernels, the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user-space. All servers are kept separate and run in different address spaces. Servers invoke "services" from each other by sending messages via IPC (Interprocess Communication). This separation has the advantage that if one server fails, other servers can still work efficiently.


Only the very important parts, like IPC (inter-process communication), the basic scheduler, basic memory handling, and basic I/O primitives, are put into the kernel. Communication happens via message passing. The rest are maintained as server processes in user space.

Examples of microkernels: Mach, QNX, AmigaOS, and MINIX. GNU Hurd is a great example of an OS running on a microkernel; it is still in active development, and some popular Linux distros have a Hurd port.

Both Mac OS X and Windows use hybrid kernels, which are more closely related to monolithic kernels.

Pros

Portability
Small install footprint
Small memory footprint
Security

Cons

Hardware is more abstracted through drivers
Hardware may react slower because drivers are in user mode
Processes have to wait in a queue to get information
Processes can’t get access to other processes without waiting

Wednesday, September 13, 2017

What is a Module and How it Works

What is a Module


Modules are pieces of code that can be loaded and unloaded into the kernel upon demand. They extend the functionality of the kernel without the need to reboot the system. For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system. Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image. Besides having larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality.

Note that modprobe doesn't communicate with the kernel through netlink or sysfs (both features are younger than module loading); it calls the init_module system call, as described below.

Path


Modules are stored in /usr/lib/modules/kernel_release. You can use the command uname -r to get your current kernel release version.

Note: Module names often use underscores (_) or dashes (-); however, those symbols are interchangeable, both when using the modprobe command and in configuration files in /etc/modprobe.d/.

Commands 

To show what kernel modules are currently loaded:

$ lsmod
To show information about a module:

$ modinfo module_name
To list the options that are set for a loaded module:

$ systool -v -m module_name
To display the comprehensive configuration of all the modules:

$ modprobe -c | less
To display the configuration of a particular module:

$ modprobe -c | grep module_name
List the dependencies of a module (or alias), including the module itself:

$ modprobe --show-depends module_name


To load a module:
# modprobe module_name

To load a module by filename (i.e. one that is not installed in /lib/modules/$(uname -r)/):
# insmod filename [args]

To unload a module:
# modprobe -r module_name

Or, alternatively:


# rmmod module_name

When the kernel detects a new device, it emits a uevent message containing information about the device, including the registered vendor and model identification for devices connected to buses such as PCI and USB. Udev parses these events and constructs a fixed-form module alias, which it passes to modprobe. modprobe then looks under /lib/modules/VERSION for a file called modules.alias, generated when the kernel is installed, that maps these fixed-form aliases to actual driver module file names. (In earlier days the kernel called modprobe directly, but the way modprobe and module aliases work hasn't changed.)

modprobe loads a module by calling the init_module system call; it doesn't interact with sysfs in any way. When the module is loaded, the kernel creates an entry for it in /sys/module. Any entry that appears elsewhere in sysfs is created by the code in the module itself (e.g. a module with a driver for a type of USB device will call some generic USB support code that adds an entry under /sys/bus/usb/drivers). So a dynamically loaded driver module does appear in sysfs, but that is the kernel's doing, not modprobe's.

Automatic module handling

Today, all necessary module loading is handled automatically by udev, so if you do not need to use any out-of-tree kernel modules, there is no need to put modules that should be loaded at boot in any configuration file. However, there are cases where you might want to load an extra module during the boot process, or blacklist another one for your computer to function properly.

Kernel modules can be explicitly loaded during boot and are configured as a static list in files under /etc/modules-load.d/. Each configuration file is named in the style of /etc/modules-load.d/<program>.conf. Configuration files simply contain a list of kernel module names to load, separated by newlines. Empty lines and lines whose first non-whitespace character is # or ; are ignored.

/etc/modules-load.d/virtio-net.conf
# Load virtio-net.ko at boot
virtio-net
See modules-load.d(5) for more details.

Note: If you have upgraded your kernel but have not yet rebooted, modprobe will fail with no error message and exit with code 1, because the path /lib/modules/$(uname -r)/ no longer exists. Check manually whether this path exists when modprobe fails, to determine if this is the case.

Setting module options

To pass a parameter to a kernel module, you can pass it manually with modprobe, or ensure certain parameters are always applied using a modprobe configuration file or the kernel command line.

Manually at load time using modprobe

The basic way to pass parameters to a module is using the modprobe command. Parameters are specified on the command line using simple key=value assignments:

# modprobe module_name parameter_name=parameter_value

Using files in /etc/modprobe.d/

Files in the /etc/modprobe.d/ directory can be used to pass module settings to udev, which will use modprobe to manage the loading of the modules during system boot. Configuration files in this directory can have any name, provided they end with the .conf extension. The syntax is:

options module_name parameter_name=parameter_value
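For example, a hypothetical /etc/modprobe.d/snd-hda-intel.conf that always enables MSI for the snd_hda_intel sound driver:

/etc/modprobe.d/snd-hda-intel.conf
options snd_hda_intel enable_msi=1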

modprobe is a Linux program originally written by Rusty Russell and used to add a loadable kernel module (LKM) to the Linux kernel or to remove one from the kernel. It is commonly used indirectly: udev relies upon modprobe to load drivers for automatically detected hardware.

The modprobe program offers more full-featured "Swiss-army-knife" functionality than the more basic insmod and rmmod utilities, with the following benefits:

an ability to make more intuitive decisions about which modules to load
an awareness of module dependencies, so that when requested to load a module, modprobe adds other required modules first
the resolution of recursive module dependencies as required
If invoked with no switches, the program by default adds/inserts/installs the named module into the kernel. Root privileges are typically required for these changes.

Any arguments appearing after the module name are passed to the kernel (in addition to any options listed in the configuration file).

In some versions of modprobe, the configuration file is called modprobe.conf, and in others the equivalent is the collection of files called <modulename> in the /etc/modprobe.d directory.

Features

The modprobe program also has more configuration features than other similar utilities. It is possible to define module aliases allowing for some automatic loading of modules. When the kernel requires a module, it actually runs modprobe to request it; however, the kernel has a description of only some module properties (for example, a device major number, or the number of a network protocol), and modprobe does the job of translating that to an actual module name via aliases.

This program also has the ability to run programs before or after loading or unloading a given module; for example, setting the mixer right after loading a sound card module, or uploading the firmware to a device immediately prior to enabling it. Although these actions must be implemented by external programs, modprobe takes care of synchronizing their execution with module loading/unloading.

Blacklist

There are cases where two or more modules both support the same devices, or a module invalidly claims to support a device: the blacklist keyword indicates that all of a particular module's internal aliases are to be ignored.

There are a couple of ways to blacklist a module; where this is configured depends on the method used to load it.

There are two ways to blacklist a module using the modprobe.conf system. The first is to use the blacklist keyword in /etc/modprobe.d/blacklist:

cat /etc/modprobe.d/blacklist
blacklist ieee1394
blacklist ohci1394
blacklist eth1394
blacklist sbp2

Alternatively, you can modify /etc/modprobe.conf:

alias sub_module /dev/null
alias module_main /dev/null
options module_main needed_option=0
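A stricter variant uses the install keyword, so that any attempt to load the module runs /bin/false instead (a sketch, using one of the FireWire modules blacklisted above):

install ieee1394 /bin/false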

How to enable SFTP and disable SSH

Sometimes you want to have users that have access to files on your server, but you don't want them to be able to log in and execute commands on your server. This is done quite easily.

Add the user as usual and assign a password. Then run the following command (replace 'username' with the real user name):

Step 1: Modify the User's Shell

root@host # usermod -s /usr/lib/sftp-server username
This changes the user's shell to sftp-server.

The last step for this to work is to add '/usr/lib/sftp-server' to /etc/shells to make it a valid shell, e.g. like this:

root@host # echo '/usr/lib/sftp-server' >> /etc/shells

There. Now you've set up a user who can only access your server with SFTP.

Step 2: Alternative Method Using internal-sftp

Another approach is to use OpenSSH's built-in SFTP subsystem. In /etc/ssh/sshd_config, set:

Subsystem sftp internal-sftp

And then restrict the users in a dedicated group:
Match group sftponly
     ChrootDirectory /home/%u
     X11Forwarding no
     AllowTcpForwarding no
     ForceCommand internal-sftp

Add your users to the sftponly group. You have to change the user's home directory to / because of the chroot, and /home/user should be owned by root. I'd also set /bin/false as the user's shell, as shown below.
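For example (a sketch, using a hypothetical user andrew):

# groupadd sftponly
# usermod -aG sftponly -s /bin/false -d / andrew
# chown root:root /home/andrew
# service sshd restart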

Step 3: Directory Structure & Permissions

Let's set up the correct directory structure and file system security.

Just a quick note regarding the ChrootDirectory value. As you can see I've used /var/www/andrew; however, this is not the document root of the website. There is a subfolder called webroot (/var/www/andrew/webroot), which is where the user would store all their web documents. I've found that you'll get unexpected results if you try to drop the user directly into a directory in which they have owner/group permissions.

So we chroot the user to /var/www/andrew, but we don't give the user andrew anything more than read and execute permissions on that directory.

# ls -lad /var/www/andrew
drwxr-xr-x 3 root root 4096 Jan  7 13:24 /var/www/andrew
To configure the above permissions run:

# chown root:root /var/www/andrew ; chmod 755 /var/www/andrew
Now let's look at the file permissions of the actual webroot folder (this is where the user's working web documents would be stored).

# ls -lad /var/www/andrew/webroot
drwxrwxr-x 7 andrew andrew 4096 Jan  7 13:39 /var/www/andrew/webroot
To configure the above settings run:

# chown andrew:andrew /var/www/andrew/webroot ; chmod 775 /var/www/andrew/webroot
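Once sshd has been restarted, you can verify the setup (the host name is a placeholder):

$ sftp andrew@server      # should log in and land inside the chroot
$ ssh andrew@server       # should be refused, since only SFTP is allowed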

Difference between Internal-SFTP and SFTP Server

Internal SFTP

The internal-sftp subsystem was added much later than the standalone sftp-server binary, but it is the default by now. It supports everything that sftp-server does and has the advantage that it doesn't require any support files when used with ChrootDirectory.

Another advantage is performance, as it's not necessary to run a new process for SFTP. I believe there's no reason to use sftp-server for new installations.

SFTP Server

The sftp-server is still kept for backward compatibility for installations that rely on it.

For example, the administrator may rely on a login shell configuration to prevent certain users from logging in. Switching to internal-sftp would bypass that restriction, as the login shell is no longer involved.

Using the sftp-server binary (being a standalone process), you can also use some hacks, like running the SFTP server under su or sudo.

Some SFTP clients expose this; for example, WinSCP's SFTP server option (on the SFTP page of its Advanced Site Settings dialog) can execute the SFTP binary under a different user. With an OpenSSH server, you can specify: sudo /bin/sftp-server

Tuesday, August 29, 2017

List PID of the respective User and Hide others

If you are using Linux kernel version 3.2+ (or RHEL/CentOS 6.5+) you can hide processes from other users. Only root can see all processes; other users see only their own. All you have to do is remount the /proc filesystem with the Linux kernel hardening hidepid option.

Say hello to the hidepid option

This option defines how much info about processes we want to be available for non-owners. The values are as follows:

hidepid=0 – The old behavior – anybody may read all world-readable /proc/PID/* files (default).

hidepid=1 – It means users may not access any /proc/PID/ directories but their own. Sensitive files like cmdline, sched*, and status are now protected against other users.

hidepid=2 – It means hidepid=1 plus all /proc/PID/ directories will be invisible to other users. It complicates an intruder's task of gathering info about running processes: whether some daemon runs with elevated privileges, whether another user runs some sensitive program, whether other users run any program at all, etc.

Linux kernel protection: Hiding processes from other users

Type the following mount command:

# mount -o remount,rw,hidepid=2 /proc

Edit /etc/fstab, enter:

# vi /etc/fstab

Update/append/modify the proc entry as follows so that the protection gets enabled automatically at server boot time:

proc    /proc    proc    defaults,hidepid=2     0     0
Save and close the file. To see the effect, compare the process listing as a normal user before and after remounting /proc:

$ ps -ef
$ sudo -s
# mount -o remount,rw,hidepid=2 /proc
$ ps -ef
$ top
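If a monitoring daemon (for example, a nagios user) still needs to see every process, proc also accepts a gid= option naming a group that is exempt from hidepid. A sketch (the procview group is an assumption, and some mount versions require the numeric GID):

# groupadd -r procview
# usermod -aG procview nagios
# mount -o remount,rw,hidepid=2,gid=procview /proc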

How to Enable chroot in DNS

A chroot jail is a way to isolate a process and its children from the rest of the system. It should only be used for processes that don't run as root, as root users can break out of the jail very easily.

The idea is that you create a directory tree where you copy or link in all the system files needed for the process to run. You then use the chroot() system call to change the root directory to the base of this new tree and start the process running in that chroot'd environment. Since it can't reference paths outside the modified root, it can't maliciously perform operations (read/write, etc.) on those locations.

On Linux, using bind mounts is a great way to populate the chroot tree. With them, you can pull in folders like /lib and /usr/lib while not pulling in /usr, for example. Just bind the directory trees you want to directories you create in the jail directory.
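For example (a sketch, using /jail as the chroot base):

# mkdir -p /jail/lib /jail/usr/lib
# mount --bind /lib /jail/lib
# mount --bind /usr/lib /jail/usr/lib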

Setup Bind DNS Server in Chroot Jail on CentOS 7

1. Install the Bind chroot DNS server:

# yum install bind-chroot -y

2. Initialize the /var/named/chroot environment by running:
# /usr/libexec/setup-named-chroot.sh /var/named/chroot on
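3. Then switch the service over to the chrooted daemon (the usual CentOS 7 procedure):

# systemctl stop named
# systemctl disable named
# systemctl enable named-chroot
# systemctl start named-chroot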

Monday, August 14, 2017

Difference Between for Loop and While Loop

A while loop iterates until its condition is no longer true. A for loop can be used for a fixed number of iterations; a while loop can be used when you don't know in advance how many iterations you need.

A while loop is like a combination of an if statement and a for loop: it iterates while a condition is true, and when the condition resolves to false, the loop stops.

Syntax of While

while [ test_condition ]
do
  commands
done

Syntax of for

a) In Bash format

for variable in list_of_items
do
   commands
done   

b) In C format

max=upper_limit
for ((i=1; i<=max; i++))
do
  commands
done
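As a concrete example, both of the following loops print the numbers 1 to 5 (a minimal sketch):

#!/bin/bash

# while loop: repeat as long as the condition holds
i=1
while [ $i -le 5 ]
do
  echo $i
  i=$((i + 1))
done

# for loop (C format): a fixed number of iterations
for ((i=1; i<=5; i++))
do
  echo $i
done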

Friday, August 11, 2017

Difference between NFS v2, v3 and v4

S.No | Feature        | NFS v2             | NFS v3             | NFS v4
1    | Authentication | AUTH_SYS           | AUTH_SYS           | Kerberos
2    | Parallel NFS   | No                 | No                 | Yes
3    | State          | Stateless          | Stateless          | Stateful
4    | RPC Call Type  | Single             | Single             | Compound
5    | Exports        | Mounted separately | Mounted separately | All exports can be mounted together in a directory tree structure as part of a pseudo-filesystem
6    | Locking        | NLM                | NLM                | Built into the NFSv4 protocol

Brief Explanations

1) Authentication

Kerberos is a computer network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.

2) Parallel NFS

When a server implements pNFS, a client is able to access data through multiple servers concurrently. pNFS supports three storage protocols or layouts: files, objects, and blocks.

3) State

A data structure, change_info, is returned by CREATE, LINK, OPEN, and REMOVE calls, so clients become aware of concurrent operations done on NFS directories and files by other clients. Clients can perform efficient caching and discard caches depending on the change_info.

4) RPC Call type

In versions prior to NFSv4, each RPC call is a separate transaction over the wire. NFSv4 avoids the expense of individual RPC requests and the attendant latency issues by allowing these calls to be bundled together. For instance, a lookup, open, read and close can be sent once over the wire, and the server can execute the entire compound call as a single entity. The effect is to reduce latency considerably for multiple operations.

5) Exports

NFSv4 servers, rather than exporting multiple file systems, export a single "pseudo file system," formed from multiple actual file systems and customized for each client.

6) Locking

NFSv3 relied on the Network Lock Manager (NLM) for file locking. NLM was itself a separate protocol, so file locking was glued on rather than being a core part of the file access protocol. NFSv4 changes that: locking is built into the protocol itself.

From the client, you can verify which NFS version a mount is using with:

nfsstat -m
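To mount an export with a specific NFS version from the client (a sketch; the server name and paths are placeholders):

# mount -t nfs -o vers=4 my.server.com:/export /mnt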