Follow LinuxTechi - Linux Tutorials, Commands on Feedspot




Many times we need to work with remote Linux systems: we log in to the remote host, perform some work, and exit that session. Can we perform all these actions from the local machine? Yes, it's possible, and this tutorial demonstrates it with exhaustive examples.

Command execution over SSH

SSH allows us to execute commands on a remote machine without logging in to that machine. In this tutorial we'll discuss various ways to achieve this.

Execute single command

Let us execute the uname command over SSH.

$ ssh linuxtechi@ uname

If you observe the above command, it is similar to a regular SSH command, with one minor difference: we have appended the command to be executed after the user and host.

When we execute this command, it generates the below output:

Execute multiple commands

Using this technique, we can execute multiple commands in a single SSH session. We just need to separate the commands with semicolons (;) and quote the whole list, so that all of them run on the remote machine:

$ ssh linuxtechi@ 'uname; hostname; date'

As expected, these commands will generate below output:

Thu Mar  1 15:47:59 IST 2018
Execute command with elevated privileges

Sometimes we need to execute a command with elevated privileges; in that case we can use sudo over SSH.

$ ssh -t linuxtechi@ sudo touch /etc/banner.txt

Note that we have used the '-t' option with SSH, which forces pseudo-terminal allocation. The sudo command requires an interactive terminal to prompt for the password, hence this option is necessary.

Execute script

Remote execution is not limited to commands; we can even execute a script over SSH. The script invoked in the example below must exist on the remote host (copy it there with scp if needed), and we just provide its path to the SSH command. Alternatively, a local script can be run remotely without copying it: ssh user@host 'bash -s' < ./script.sh

Let us create a simple shell script with the following contents and name it system-info.sh:
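The script contents were shown as an image in the original article; a minimal sketch consistent with the surrounding examples (the exact contents are an assumption) might be:

```shell
#!/bin/bash
# system-info.sh - print basic information about the system it runs on
echo "Hostname : $(hostname)"
echo "Kernel   : $(uname -r)"
echo "Date     : $(date)"
```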


Make the script executable and run it on the remote server as follows:

$ chmod +x system-info.sh
$ ssh linuxtechi@ ./system-info.sh

As some of you might have guessed, it will generate below output:

Variable expansion problem

If we set a variable locally and reference it inside a single-quoted remote command, variable expansion will not work. Let us see this with a simple example:

$ msg="Hello LinuxTechi"
$ ssh linuxtechi@ 'echo $msg'

When we execute the above command, we can observe that the variable does not get expanded: the single quotes prevent the local shell from expanding it, and the remote shell has no variable named msg.

To resolve this issue, we need to use the -c option of the shell. In our case we'll use it with bash as follows; the outer double quotes let the local shell expand $msg before the command is sent:

$ ssh linuxtechi@ bash -c "'echo $msg'"
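The quoting rules can be seen without a remote host at all; running a child bash locally shows the same behavior (a sketch):

```shell
msg="Hello LinuxTechi"
# Single quotes: $msg is expanded by the child shell, which has no such variable
bash -c 'echo $msg'     # prints an empty line
# Double quotes: $msg is expanded by the local shell before the child runs
bash -c "echo $msg"     # prints: Hello LinuxTechi
```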
Configure password-less SSH session

By default, SSH asks for password authentication each time. This is enforced for security reasons; however, sometimes it is annoying. To overcome this, we can use the public-private key authentication mechanism.


It can be configured using following steps:

1) Generate public-private key pair

SSH provides ssh-keygen utility which can be used to generate key pairs on local machine.

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/linuxtechi/.ssh/id_rsa): #press enter
Enter passphrase (empty for no passphrase):                         #press enter
Enter same passphrase again:                                        #press enter
Your identification has been saved in /home/linuxtechi/.ssh/id_rsa.
Your public key has been saved in /home/linuxtechi/.ssh/id_rsa.pub.

The above output shows that the generated key pair is stored under the ~/.ssh directory.

2)  Add public key to ~/.ssh/authorized_keys file on remote host

The simplest way to do this is with the ssh-copy-id command.

$ ssh-copy-id -i ~/.ssh/id_rsa.pub linuxtechi@

In the above command:

  • the -i option indicates the identity file
  • ~/.ssh/id_rsa.pub is the identity file
  • the remaining text is the remote user and remote server IP

NOTE: Never share your private key with anyone.

3) That's it. Isn't it simple? Now we can execute commands over SSH without entering a password. Let us verify this.

$ ssh linuxtechi@ uname
Limitation of public-private key authentication

Though public-private key authentication makes our life easier, it is not perfect. Its major downside is that it cannot be fully automated, because user interaction is required the first time: remember, we had to provide a password to the ssh-copy-id command.

No need to panic, though; this is not the end of the world. In the next section we'll discuss an approach that eliminates this limitation.

sshpass utility

To overcome the above limitation, we can use the sshpass utility. It provides a non-interactive way to authenticate SSH sessions. This section discusses the various ways to use it.

Installation of sshpass

The sshpass utility is part of Ubuntu's official repositories. We can install it using the following commands:

$ sudo apt-get update
$ sudo apt-get install sshpass


sshpass can accept the password as an argument, read it from a file, or take it from an environment variable. Let us discuss each of these approaches.

1) Password as an argument

We can provide the password as an argument using the -p option:

$ sshpass -p 'secret-password' ssh linuxtechi@ uname

2) Password from file

sshpass can read the password from a regular file using the -f option:

$ echo "secret-password" > password-file
$ sshpass -f password-file ssh linuxtechi@ uname

3) Password from environment variable

In addition, we can provide the password via the SSHPASS environment variable using the -e option:

$ export SSHPASS="secret-password"
$ sshpass -e ssh linuxtechi@ uname

This tutorial showed various tricks and tips for remote command execution over SSH. Once you get an understanding of these tricks, they will make your life much easier and definitely improve your productivity.


NFS (Network File System) is the most widely used server for providing files over a network. With an NFS server we can share folders over the network, and allowed clients or systems can access those shared folders and use them in their applications. When it comes to a production environment, we should configure the NFS server in high availability to rule out a single point of failure.

In this article we will discuss how to configure NFS server high-availability clustering (active-passive) with Pacemaker on CentOS 7 or RHEL 7.

Following are my lab details that I have used for this article,

  • NFS Server 1 (nfs1.example.com) – – Minimal CentOS 7 / RHEL 7
  • NFS Server 2 (nfs2.example.com) – – Minimal CentOS 7 / RHEL 7
  • NFS Server VIP –
  • Firewall enabled
  • SELinux enabled

Refer to the below steps to configure NFS server active-passive clustering on CentOS 7 / RHEL 7.

Step:1 Set Host name on both nfs servers and update /etc/hosts file

Login to both nfs servers and set the hostname as “nfs1.example.com” and “nfs2.example.com” respectively using hostnamectl command, Example is shown below

~]# hostnamectl set-hostname "nfs1.example.com"
~]# exec bash

Update the /etc/hosts file on both nfs servers with entries mapping the node IP addresses to nfs1.example.com and nfs2.example.com.
Step:2 Update both nfs servers and install pcs packages

Run the below 'yum update' command to apply all the updates on both nfs servers, and then reboot once.

~]# yum update && reboot

Install pcs and fence-agent packages on both nfs servers,

[root@nfs1 ~]# yum install -y pcs fence-agents-all
[root@nfs2 ~]# yum install -y pcs fence-agents-all

Once the pcs and fence agents' packages are installed, allow the pcs-related ports in the OS firewall on both nfs servers,

~]# firewall-cmd --permanent --add-service=high-availability
~]# firewall-cmd --reload

Now Start and enable pcsd service on both nfs nodes using beneath commands,

~]# systemctl enable pcsd
~]# systemctl start  pcsd
Step:3 Authenticate nfs nodes and form a cluster

Set a password for the hacluster user; the pcsd service uses this user to authenticate the cluster nodes. So let's first set the password for the hacluster user on both nodes,

[root@nfs1 ~]# echo "enter_password" | passwd --stdin hacluster
[root@nfs2 ~]# echo "enter_password" | passwd --stdin hacluster

Now authenticate the cluster nodes. In our case, nfs2.example.com will be authenticated on nfs1.example.com; run the below pcs cluster command on "nfs1":

[root@nfs1 ~]# pcs cluster auth nfs1.example.com nfs2.example.com
Username: hacluster
nfs1.example.com: Authorized
nfs2.example.com: Authorized
[root@nfs1 ~]#

Now it's time to form a cluster with the name "nfs_cluster" and add both nfs nodes to it. Run the below "pcs cluster setup" command from any nfs node,

[root@nfs1 ~]# pcs cluster setup --start --name nfs_cluster nfs1.example.com nfs2.example.com

Enable pcs cluster service on both the nodes so that nodes will join the cluster automatically after reboot. Execute below command from either of nfs node,

[root@nfs1 ~]# pcs cluster enable --all
nfs1.example.com: Cluster Enabled
nfs2.example.com: Cluster Enabled
[root@nfs1 ~]#
Step:4 Define Fencing device for each cluster node.

Fencing is the most important part of a cluster: if any node goes faulty, the fencing device will remove it from the cluster. In Pacemaker, fencing is defined using a STONITH (Shoot The Other Node In The Head) resource.

In this tutorial we are using a shared disk of size 1 GB (/dev/sdc) as a fencing device. Let’s first find out the id of /dev/sdc disk

[root@nfs1 ~]# ls -l /dev/disk/by-id/

Note down the id of disk /dev/sdc, as we will use it in the "pcs stonith" command.

Now run the below "pcs stonith" command from either node to create the fencing device (disk_fencing):

[root@nfs1 ~]# pcs stonith create disk_fencing fence_scsi pcmk_host_list="nfs1.example.com nfs2.example.com" pcmk_monitor_action="metadata" pcmk_reboot_action="off" devices="/dev/disk/by-id/wwn-0x6001405e49919dad5824dc2af5fb3ca0" meta provides="unfencing"
[root@nfs1 ~]#

Verify the status of stonith using below command,

[root@nfs1 ~]# pcs stonith show
 disk_fencing   (stonith:fence_scsi):   Started nfs1.example.com
[root@nfs1 ~]#

Run “pcs status” command to view status of cluster

[root@nfs1 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: nfs2.example.com (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Sun Mar  4 03:18:47 2018
Last change: Sun Mar  4 03:16:09 2018 by root via cibadmin on nfs1.example.com

2 nodes configured
1 resource configured
Online: [ nfs1.example.com nfs2.example.com ]
Full list of resources:
 disk_fencing   (stonith:fence_scsi):   Started nfs1.example.com
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nfs1 ~]#

Note: If your cluster nodes are virtual machines hosted on VMware, then you can use the "fence_vmware_soap" fencing agent. To configure "fence_vmware_soap" as the fencing agent, refer to the below logical steps:

1) Verify whether your cluster nodes can reach the VMware hypervisor or vCenter

# fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> --ssl -z -v -o list |egrep "(nfs1.example.com|nfs2.example.com)"
# fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> --ssl -z -o list |egrep "(nfs1.example.com|nfs2.example.com)"

If you are able to see the VM names in the output then it is fine; otherwise you need to check why the cluster nodes are not able to connect to ESXi or vCenter.

2) Define the fencing device using below command,

# pcs stonith create vmware_fence fence_vmware_soap pcmk_host_map="node1:nfs1.example.com;node2:nfs2.example.com" ipaddr=<vCenter_IP_address> ssl=1 login=<user_name> passwd=<password>

3) check the stonith status using below command,

# pcs stonith show
Step:5 Install nfs and format nfs shared disk

Install ‘nfs-utils’ package on both nfs servers

[root@nfs1 ~]# yum install nfs-utils -y
[root@nfs2 ~]# yum install nfs-utils -y

Stop and disable local “nfs-lock” service on both nodes as this service will be controlled by pacemaker

[root@nfs1 ~]# systemctl stop nfs-lock &&  systemctl disable nfs-lock
[root@nfs2 ~]# systemctl stop nfs-lock &&  systemctl disable nfs-lock

Let's assume we have a shared disk "/dev/sdb" of size 10 GB between the two cluster nodes. Create a partition on it and format it as an xfs file system:

[root@nfs1 ~]# fdisk /dev/sdb

Run the partprobe command on both nodes and reboot once.

~]# partprobe

Now format “/dev/sdb1” as xfs file system

[root@nfs1 ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=655296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2621184, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nfs1 ~]#

Create mount point for this file system on both the nodes,

[root@nfs1 ~]# mkdir /nfsshare
[root@nfs2 ~]# mkdir /nfsshare
Step:6 Configure all required NFS resources on Cluster Nodes

Following are the required NFS resources:

  • Filesystem resource
  • nfsserver resource
  • exportfs resource
  • IPaddr2 floating IP address resource

For the Filesystem resource, we need shared storage among the cluster nodes. We have already created a partition on the shared disk (/dev/sdb1) in the above steps, so we will use that partition. Use the below "pcs resource create" command to define the Filesystem resource from any of the nodes,

[root@nfs1 ~]# pcs resource create nfsshare Filesystem device=/dev/sdb1  directory=/nfsshare fstype=xfs --group nfsgrp
[root@nfs1 ~]#

In the above command we have defined the NFS file system as "nfsshare" under the group "nfsgrp". From now on, all nfs resources will be created under the group nfsgrp.

Create nfsserver resource with name ‘nfsd‘ using the below command,

[root@nfs1 ~]# pcs resource create nfsd nfsserver nfs_shared_infodir=/nfsshare/nfsinfo --group nfsgrp
[root@nfs1 ~]#

Create the exportfs resource with the name "nfsroot",

[root@nfs1 ~]#  pcs resource create nfsroot exportfs clientspec="" options=rw,sync,no_root_squash directory=/nfsshare fsid=0 --group nfsgrp
[root@nfs1 ~]#

In the above command, clientspec indicates the allowed clients which can access the nfsshare

Create NFS IPaddr2 resource using below command,

[root@nfs1 ~]# pcs resource create nfsip IPaddr2 ip= cidr_netmask=24 --group nfsgrp
[root@nfs1 ~]#

Now view and verify the cluster using pcs status

[root@nfs1 ~]# pcs status

Once you are done with the NFS resources, allow the nfs server ports in the OS firewall on both nfs servers,

~]# firewall-cmd --permanent --add-service=nfs
~]#  firewall-cmd --permanent --add-service=mountd
~]#  firewall-cmd --permanent --add-service=rpc-bind
~]#  firewall-cmd --reload
Step:7 Try Mounting NFS share on Clients

Now try mounting the nfs share using mount command, example is shown below

[root@localhost ~]# mkdir /mnt/nfsshare
[root@localhost ~]# mount /mnt/nfsshare/
[root@localhost ~]# df -Th /mnt/nfsshare
Filesystem     Type  Size  Used Avail Use% Mounted on
               nfs4   10G   32M   10G   1% /mnt/nfsshare
[root@localhost ~]#
[root@localhost ~]# cd /mnt/nfsshare/
[root@localhost nfsshare]# ls
[root@localhost nfsshare]#

For cluster testing, stop the cluster service on any one node and see whether the nfsshare is still accessible. Let's assume I am going to stop the cluster service on "nfs1.example.com",

[root@nfs1 ~]# pcs cluster stop
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...
[root@nfs1 ~]#

Now go to the client machine and see whether the nfsshare is still accessible. In my case I am still able to access it and create files on it.

[root@localhost nfsshare]# touch test
[root@localhost nfsshare]#

Now start the cluster service on "nfs1.example.com" again using the below command,

[root@nfs1 ~]# pcs cluster start
Starting Cluster...
[root@nfs1 ~]#

That's all from this article; it confirms that we have successfully configured NFS active-passive clustering using Pacemaker. Please do share your feedback and comments in the comments section below.



sudo stands for "superuser do". It allows authorized users to execute a command as another user, who can be a regular user or the superuser. However, most of the time we use it to execute commands with elevated privileges.

The sudo command works in conjunction with security policies; the default security policy is sudoers, and it is configurable via the /etc/sudoers file. Its security policies are highly extensible: one can develop and distribute custom policies as plugins.

How it's different from su

In GNU/Linux there are two ways to run command with elevated privileges:

  • Using su command
  • Using sudo command

su stands for switch user. Using su, we can switch to the root user and execute commands. But there are a few drawbacks with this approach:

  • We need to share the root password with another user.
  • We cannot give controlled access, as the root user is the superuser.
  • We cannot audit what the user is doing.

sudo addresses these problems in a unique way:

  1. First of all, we don't need to compromise the root user's password. A regular user uses their own password to execute commands with elevated privileges.
  2. We can control the access of a sudo user, meaning we can restrict the user to executing only certain commands.
  3. In addition, all activities of a sudo user are logged, so we can always audit what actions were performed. On Debian-based GNU/Linux, all activities are logged in the /var/log/auth.log file.

Later sections of this tutorial shed light on these points.

Hands on with sudo

Now that we have a fair understanding of sudo, let us get our hands dirty with some practice. For demonstration, I am using Ubuntu; however, behavior with other distributions should be identical.

Allow sudo access

Let us add a regular user as a sudo user. In my case the user's name is linuxtechi.

1) Edit /etc/sudoers file as follows:

$ sudo visudo

2) Add below line to allow sudo access to user linuxtechi:

linuxtechi ALL=(ALL) ALL

In the above line:

  • linuxtechi indicates the user name
  • the first ALL permits sudo access from any terminal/machine
  • (ALL) allows sudo to execute the command as any user
  • the last ALL indicates any command can be executed
Execute command with elevated privileges

To execute a command with elevated privileges, just prepend sudo to the command as follows:

$ sudo cat /etc/passwd

When you execute this command, it will ask for linuxtechi's password, not the root user's password.

Execute a command as another user

In addition, we can use sudo to execute a command as another user. For instance, in the below command, user linuxtechi executes the command as the user devesh:

$ sudo -u devesh whoami
[sudo] password for linuxtechi:
Built-in command behavior

One limitation of sudo is that a shell's built-in commands don't work with it. For instance, history is a built-in command; if you try to execute it with sudo, a "command not found" error is reported as follows:

$ sudo history
[sudo] password for linuxtechi:
sudo: history: command not found

Access root shell

To overcome the above problem, we can get access to a root shell and execute any command from there, including the shell's built-ins.

To access root shell, execute below command:

$ sudo bash

After executing this command you will observe that the prompt changes to the pound (#) character.


In this section we'll discuss some useful recipes which will help improve your productivity. Most of these commands can be used in day-to-day tasks.

Execute previous command as a sudo user

Let us suppose you want to execute a command from your history with elevated privileges; then the below trick will be useful:

$ sudo !4

The above command will execute the 4th command from history with elevated privileges. Similarly, sudo !! re-executes the immediately previous command.

sudo command with Vim

Many times we edit a system configuration file and, while saving, realize that we need root access to do so, which could mean losing our changes. There is no need to panic; we can use the below command in Vim to rescue us from this situation:

:w !sudo tee %

In the above command:

  • the colon (:) indicates we are in Vim's ex mode
  • the exclamation mark (!) indicates that we are running a shell command
  • sudo and tee are the shell commands
  • the percent (%) sign expands to the name of the current file
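What happens under the hood is plain shell plumbing: Vim pipes the buffer to the command after !, and tee writes its stdin to the named file with tee's (elevated) privileges. Stripped of Vim and sudo, the same mechanism looks like this (the file path is just an illustration):

```shell
# Imitate ':w !sudo tee %': echo stands in for the buffer being written,
# and tee writes its stdin to the target file
echo "important config line" | tee /tmp/demo.conf >/dev/null
cat /tmp/demo.conf     # prints: important config line
```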
Execute multiple commands using sudo

So far we have executed only a single command with sudo, but we can execute multiple commands with it. Just separate the commands with semicolons (;) as follows:

$ sudo -- bash -c 'pwd; hostname; whoami'

In the above command:

  • the double hyphen (--) stops processing of command line switches
  • bash indicates the shell to be used for execution
  • the commands to be executed follow the -c option
Run sudo command without password

When a sudo command is executed for the first time, it will prompt for a password, which by default is cached for the next 15 minutes. However, we can override this behavior and disable password authentication using the NOPASSWD keyword as follows:

linuxtechi ALL=(ALL) NOPASSWD: ALL
Restrict user to execute certain commands

To provide controlled access, we can restrict a sudo user to executing only certain commands. For instance, the below line allows execution of the echo and ls commands only (note that the commands must be comma-separated):

linuxtechi ALL=(ALL) NOPASSWD: /bin/echo, /bin/ls
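Putting the pieces together, a controlled-access entry usually lives in its own file under /etc/sudoers.d/; the user name, service and commands below are hypothetical examples:

```
# /etc/sudoers.d/deploy -- hypothetical fragment; always edit with 'visudo -f'
# Allow user 'deploy' to restart one service and follow its logs, nothing else
deploy ALL=(root) NOPASSWD: /bin/systemctl restart myapp, /usr/bin/journalctl -u myapp
```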
Insights about sudo

Let us dig more about sudo command to get insights about it.

$ ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root 145040 Jun 13  2017 /usr/bin/sudo

If you observe the file permissions carefully, the setuid bit is enabled on sudo. When any user runs this binary, it runs with the privileges of the user that owns the file, in this case the root user.

To demonstrate this, we can use id command with it as follows:

$ id
uid=1002(linuxtechi) gid=1002(linuxtechi) groups=1002(linuxtechi)

When we execute id command without sudo then id of user linuxtechi will be displayed.

$ sudo id
uid=0(root) gid=0(root) groups=0(root)

But if we execute id command with sudo then id of root user will be displayed.


The takeaway from this article is that sudo provides more controlled access to regular users. Using these techniques, multiple users can interact with GNU/Linux in a secure manner.


When it comes to data safety and security, not only large companies but also personal computer owners need good backup and recovery software to protect their precious data. Fortunately, there are a lot of open source backup tools available that can help. Nowadays, desktops come with huge storage capacity, which means lots of stored data and a huge risk of losing it all if the system crashes; it may take days or weeks to recover and repair the damage. Hence it is increasingly important to have a proper recovery solution in place at all times.

In this article we are going to review some of the top 12 open source backup tools for Linux systems:

1) Bacula

When it comes to open source backup tools for Linux systems, Bacula is one of the most widely used and popular backup and recovery solutions. It also helps in verifying data across different networked computer systems effectively. Bacula comes with an effective and advanced storage management solution that helps you recover lost and damaged files much more quickly than other backup and recovery solutions. It is a complete backup solution suitable for a small or even a large enterprise to maintain and secure its data. Bacula comes in two versions, Basic and Enterprise. The Basic version has all the essential features of a backup and recovery solution, while the Enterprise version adds a lot of advanced features, including bare metal backup, cloud backup, and backup solutions for VMs.

2) Duplicati

Duplicati is another popular Linux open source backup solution that is completely free even for commercial usage. It is designed to run on various operating systems, including Linux, Windows and macOS. With Duplicati you can easily take online backups, and it comes with a pause/resume feature that pauses the backup process during any network issue and automatically resumes once the issue is rectified, continuing from where it stopped. Duplicati also conducts regular checks on the backups to detect any broken or corrupt backup. All backups are protected with AES-256 encryption and are compressed before being stored on the servers.

3) rsnapshot

rsnapshot is a great filesystem snapshot tool that is capable of taking incremental backups of both local and remote filesystems. It is an rsync-based backup system that can back up any number of machines on the network. Since rsync is cleverly designed to use hard links, disk space is used effectively: even though each snapshot looks like a complete backup, only one full backup is taken and thereafter only the differences are stored, saving space. Over ssh, rsnapshot can take snapshots of remote systems as well.
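The hard-link trick that rsnapshot relies on can be sketched in a few shell lines:

```shell
# Two 'snapshot' directories sharing one unchanged file via a hard link:
# the file's data blocks are stored only once on disk
mkdir -p snap1 snap2
echo "unchanged file" > snap1/data.txt
ln snap1/data.txt snap2/data.txt   # hard link, no extra data copied
stat -c '%h' snap1/data.txt        # link count is now 2
```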

4) Amanda

The Advanced Maryland Automatic Network Disk Archiver, also called Amanda, is another great open source backup and recovery software for Linux systems. It is basically an enterprise-grade backup solution, and according to the company, Amanda runs on around a million servers and desktops worldwide across various operating systems, including Linux, Windows, UNIX, macOS and BSD. Amanda comes in three editions: the Community edition, the Enterprise edition and the Zmanda Backup Appliance. The Community edition is freely available for download, whereas the Enterprise edition supports live application and database backups. The Zmanda Backup Appliance is a virtual machine capable of backing up an entire network easily.

5) rsync (Command line tool)

rsync is another popular Linux open source tool that performs quick incremental file transfers. rsync can transfer files from a local host to a remote host and vice versa. Its remote-update protocol enables speedier transfers, as it checks whether the destination file already exists and skips copying it again. The delta-transfer algorithm also lets it sync remote files quickly: instead of sending the whole file, only the differences are sent, making the sync very fast.

6) BackupPC

BackupPC is another enterprise range open source backup tool that supports all major operating systems including Windows, Linux and Mac. It is also a high performance backup system that utilizes compression and pooling setup that greatly reduces the disk storage along with disk I/O.

7) Rear (Relax & Recover)

Relax-and-Recover, also called ReaR, is largely a set-up-and-forget utility: after installing it you don't need to do anything, as it takes care of backing up and, when needed, restoring files automatically. The design is completely modular and easy to use, and it supports various boot media types, including USB, eSATA, PXE, ISO and OBDR. It also supports both tar and rsync as internal backup methods.

Read More on : How to install and use ReaR (Migration & Recovery tool) on CentOS 7 / RHEL 7

8) Clonezilla

Clonezilla is a great backup and cloning utility that can clone or image an entire partition or disk easily. It is available in two versions, Clonezilla Live and Clonezilla SE. It is completely free and open source; the Live edition is ideal for backing up and restoring single systems, whereas SE, the server edition, is perfect for taking huge server backups. One of the important features of this cloning utility is that only the used blocks of the hard disk are saved and restored, enhancing cloning efficiency.

9) Back in Time

Back in Time is an rsync-based backup utility that makes interesting use of hard links during the backup process. Since hard links are used, old snapshots can be removed easily at regular intervals. Note that Back in Time does not support compression.

10) Bareos – Open Source Data Protection

Bareos, which stands for "Backup Archiving Recovery Open Sourced", is open source backup software that can take efficient backups and restores of computers to different media, including tape and disk. It enables the system administrator to easily manage all kinds of backup, restore and data verification for the computers of an entire network.

11) Box Backup

Box Backup is the next open source backup utility in our list; it copies or backs up files to disk only and doesn't support other kinds of backup media such as tape. It comes with strong encryption features and uses minimal bandwidth. The backup utility is fully automatic and secure.

12) sbackup (Simple Backup Suite)

sbackup, or Simple Backup Suite, is primarily developed as a desktop backup utility. sbackup can take frequent backups of your files and directories, and it utilizes regular expressions to exclude files that have already been copied. It supports compressed archives, so it can be used for backing up huge amounts of data. Though it is popular among users for its predefined backup profiles, it can also be used for manual, scheduled and custom backups. Note that sbackup only provides backup and doesn't have any restore feature.


As we know, VLC is the most widely used media player on Linux and Windows desktops. The VideoLAN project has recently released its latest stable version, VLC 3. Some of the new features of VLC 3 are listed below:

  • It supports 3D audio and 360 Video
  • VLC 3 supports hardware decoding by default and can play 4K and 8K videos
  • It can stream local media to Chromecast devices.
  • It supports HDMI pass through for Audio HD codecs (E-AC3 & TrueHD)
  • It supports network browsing for network file systems like SMB, NFS, FTP & SFTP etc.

To read more about the new features of VLC 3, you can refer to its official site.


In this article we will discuss how we can install VLC 3 on Debian 9 and Ubuntu 16.04 / 17.10

Installation of VLC 3 on Debian 9

The VLC 3 package is not available in the default Debian 9 package repositories, so first we have to configure additional Debian repositories. Add the following lines to "/etc/apt/sources.list":

deb http://deb.debian.org/debian/ unstable main contrib non-free
deb-src http://deb.debian.org/debian/ unstable main contrib non-free

Update the repositories and install vlc package using below commands,

linuxtechi@nixhome:~$ sudo apt update
linuxtechi@nixhome:~$ sudo apt install vlc -y

Note: It is recommended to disable the above repositories once you have installed VLC 3, because packages from the unstable repository can make your system unstable.
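Instead of removing the lines, the unstable repository can also be de-prioritized with apt pinning, so its packages are only pulled in when explicitly requested; a sketch (the file name is arbitrary):

```
# /etc/apt/preferences.d/limit-unstable (hypothetical file name)
# Give unstable packages a low priority so they are never auto-upgraded
Package: *
Pin: release a=unstable
Pin-Priority: 100
```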

Installation of VLC 3 via snap

An alternate way to install VLC 3 on Debian 9 is via snap. First install snapd by running the below command,

linuxtechi@nixhome:~$ sudo apt install snapd

To view the VLC version available via snap

linuxtechi@nixhome:~$ sudo snap find vlc
Name                        Version  Developer  Notes  Summary
vlc                         3.0.0    videolan   -      The ultimate media player
mjpg-streamer               2.0      ogra       -      UVC webcam streaming tool
simplescreenrecorder-mardy  0.3.8-3  mardy      -      Simple Screen Recorder

Run below snap command to install VLC 3

linuxtechi@nixhome:~$ sudo snap install vlc
vlc 3.0.0 from 'videolan' installed

Once VLC has been installed via snap, reboot your system and then you can start using the latest version of VLC.

Access and Start VLC 3,

Click on VLC media player,

Installation of VLC 3 on Ubuntu 16.04 / 17.10

VLC 2.2 is available in the default Ubuntu 16.04 / 17.10 repositories, but we can install VLC 3 on Ubuntu 16.04 & 17.10 using the snap package.

Refer the below commands to Install VLC 3 using snap

pkumar@linuxbox:~$ sudo apt install snapd -y
pkumar@linuxbox:~$ sudo snap install vlc

Reboot your machine once vlc has been installed successfully.

Access VLC 3 media player,

Click on VLC Media player

If you don’t like the snap version of VLC 3 and want to remove it from your system, then use the below command

$ sudo snap remove vlc

That’s all from this article. Please do share your valuable feedback and comments in the comments section below.


The iostat command is used to monitor CPU utilization and the I/O (input/output) statistics of all the disks and file systems, while the nfsiostat command is used to monitor the I/O statistics of network file systems (NFS).

The iostat command monitors I/O device load by observing the time that devices are active relative to their average transfer rates. This command is especially helpful for generating reports that we can use to optimize the system’s input & output load.

iostat command generally generates two reports:

  • CPU utilization report
  • All disks i/o statistics report

To generate the reports, the iostat command reads several system files. These files are,

  • /proc/diskstats for disk stats
  • /proc/stat for system stats
  • /sys for block device stats
  • /proc/devices for persistent device names
  • /proc/self/mountstats for all the network filesystems
  • /proc/uptime for information regarding system uptime
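Those files are plain text, so the raw counters iostat works from can be inspected directly. A sketch, using the documented /proc/diskstats layout where field 3 is the device name, field 4 the completed reads and field 8 the completed writes:

```shell
# Print device name, completed reads and completed writes straight from
# /proc/diskstats (fields 3, 4 and 8 of each line).
awk '{print $3, "reads="$4, "writes="$8}' /proc/diskstats
```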

In this tutorial, we will learn how to install iostat utility on Linux systems and then we will discuss how to generate reports using iostat command,

Iostat Installation on Linux Systems:

iostat is a part of the ‘sysstat‘ package; we can install it on our system using the following commands.

For CentOS / RHEL systems:

[root@linuxtechi ~]# yum install sysstat -y

For Debian / Ubuntu systems:

$ sudo apt-get install sysstat -y

For Fedora systems:

[root@linuxtechi ~]# dnf install sysstat -y

Now let’s check out some examples to understand the iostat command better.

Example:1 Get complete statistics (CPU & Devices)

To get the complete statistics for the system, open terminal & execute the following command,

[root@linuxtechi ~]# iostat

This will produce the following output on the screen,

Here in the iostat command output,

  • %user is the CPU utilization for user processes,
  • %nice is the CPU utilization for processes running with a nice priority,
  • %system is the CPU utilization by the system (kernel),
  • %iowait is the percentage of time the CPU was idle while an I/O request was outstanding,
  • %steal is the percentage of time the CPU waited while the hypervisor was servicing another virtual CPU,
  • %idle is the percentage of time the system was idle with no outstanding request.

Devices shows the name of each device on the system,

  • tps is short for transfers per second,
  • Blk_read/s & Blk_wrtn/s are the transfer rates for read and write operations,
  • Blk_read & Blk_wrtn show the total number of blocks read & written.
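As a small worked example (a sketch, assuming the column layout shown above, where tps is the second column and the first three output lines are the banner, blank line and header), the per-device tps figures can be totalled with awk:

```shell
# Sum the tps column of an `iostat -d` report; NR>3 skips the banner,
# blank line and column-header lines, NF skips blank lines.
iostat -d | awk 'NR>3 && NF {tps += $2} END {printf "total tps: %.2f\n", tps}'
```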
Example:2 Generate only CPU stats

To generate only the CPU statistics for the system, we will use the option ‘c’ with iostat. Run the following command from the terminal,

[root@linuxtechi ~]# iostat -c

Example:3 To Generate i/o statistics for all the devices (-d option) 

To get the iostat report only for the devices connected on the system, we will use option ‘d’ with iostat command,

[root@linuxtechi ~]# iostat -d

Example:4 Generate detailed i/o statistics

Though the stats provided by the iostat command are usually sufficient, if you wish to get even more detailed statistics we can use the ‘-x’ option with the iostat command. An example is shown below,

[root@linuxtechi ~]# iostat -x

Example:5 Generate detailed reports for devices & CPU separately

To get the detailed information regarding the devices on the system, we will use option ‘d’ along with option ‘x’,

[root@linuxtechi ~]# iostat -xd

Similarly, for generating the detailed information for CPU, we will use options ‘c’ & ‘x’,

[root@linuxtechi ~]# iostat -xc
Example:6 Getting i/o statistics for a single device

iostat can also provide the i/o statistics for a single device. To get the statistics of a device, execute iostat command along with option ‘p’ followed by device name,

[root@linuxtechi ~]# iostat -p sda

Example:7 Generate reports in either MB or KB

We can also generate the system statistics in either megabyte or kilobyte units. To generate the reports in MB, we will use the option ‘m’ with the iostat command,

[root@linuxtechi ~]# iostat -m

Similarly, we can also generate the reports in KB units with the option ‘k’,

[root@linuxtechi ~]# iostat -k
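It helps to remember that the “blocks” in the default report are 512-byte sectors, so a raw Blk_read figure can also be converted by hand. A sketch with an assumed sample value, using plain shell arithmetic:

```shell
# Convert a block (512-byte sector) count to megabytes.
blk=204800                                # assumed sample Blk_read value
echo "$((blk * 512 / 1024 / 1024)) MB"    # prints: 100 MB
```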
Example:8 Generating system i/o statistics report with delay

To capture the system statistics with a delay, we can mention the iostat command followed by interval in seconds & number of reports required,

[root@linuxtechi ~]# iostat 3 2

In this example, we are capturing 2 reports at 3 seconds interval,

We can also use the delay parameter along with other options of iostat command that we discussed above.

Example:9 Generate the LVM statistics report

To generate the LVM statistics, we can use option ‘N’ with iostat command,

[root@linuxtechi ~]# iostat -N

Example:10 Generate the reports for only active devices

We can also generate reports only for devices that are active & omit devices that were inactive during the sample period. We will use the option ‘z’ with the iostat command to accomplish this,

[root@linuxtechi ~]# iostat -z 2 5

Example:11 Generate iostat reports with timestamp

To generate the iostat reports with a timestamp, we will use option ‘t’ along with iostat command,

[root@linuxtechi ~]# iostat -t

Example:12 Generate statistics report based on persistent device name

To get the report based on the persistent name of a device, we will use the option ‘j’ followed by the keyword ‘ID’ & the device’s persistent name,

Use blkid command to find the UUID of the disk.

Once you find the UUID / ID then use the below iostat command,

[root@linuxtechi ~]# iostat -j id 12244367-e751-4c1c-9336-f30d623fceb8

Example:13 Generate i/o statistics for Network File System(NFS)

We can use the nfsiostat command to generate NFS I/O statistics reports; nfsiostat is part of the ‘nfs-utils’ package. Let’s assume we have mounted two NFS shares on our server; to generate the statistics report for the NFS shares, run the below command,

[root@linuxtechi ~]# nfsiostat

Example:14 Generate System I/O statistics report over a period of time

The iostat command generates live I/O statistics for your system; if you want to view statistics reports over a period of time (historical system I/O statistics), then we should use the sar utility. The sar command is also provided by the ‘sysstat’ package.

Read More on “Generate CPU, Memory and I/O report using SAR command

That’s it guys, we have covered all the options/parameters that can be used with the iostat command. You can try combining these options to get more detailed results. Please do mention any query or question that you have regarding this tutorial.


As discussed in our earlier article, Icinga 2 is an open source tool for monitoring IT resources. We have already covered the installation of Icinga 2 on CentOS / RHEL 7 machines, & in this tutorial we are going to learn how to add Windows & Linux machines to Icinga 2 for monitoring.

Read More on – How to Install and Configure Icinga 2 / Icinga Web 2 on CentOS 7 and RHEL 7

The default port that Icinga 2 uses for monitoring is 5665, & it should be opened in the firewall to maintain a connection between master & host (called parent & child in Icinga 2). Use the below commands to open port 5665 in the OS firewall,

[root@icinga ~]# firewall-cmd --permanent --add-port=5665/tcp
[root@icinga ~]# firewall-cmd --reload
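A quick sanity check (a sketch) that the port really is open after the reload:

```shell
# List the ports opened in the active zone and look for 5665/tcp.
firewall-cmd --list-ports | grep -qw '5665/tcp' && echo "icinga2 port open"
```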

Configuring Master server (Icinga 2 Server)

First we need to prepare the master server to connect to the host systems. We will run the ‘icinga2 node wizard’ for this; run the following command from the terminal,

[linuxtechi@icinga2 ~]$ sudo icinga2 node wizard

First it will prompt you to specify whether this is a master or client setup; you need to press ‘n’ for the master setup. The rest of the options can be kept at their defaults or changed as per your needs,

Now restart the icinga2 service to implement the changes,

[linuxtechi@icinga ~]$ sudo service icinga2 restart

Now we also need to generate a ticket for our host, run the following command from terminal to generate a ticket for your host,

[linuxtechi@icinga ~]$ sudo icinga2 pki ticket --cn lnxclient.example.com
[linuxtechi@icinga ~]$

Here “lnxclient.example.com”  is the name of the linux host that we want to add to Icinga for monitoring   & “9e26a5966cd6e2d6593448214cab8d5e7bd61d59” is the generated ticket, which we will need later.

Add & configure Linux machine ( RHEL / CentOS 7) to Icinga 2

To configure the host, we will first install the icinga2 packages (same as we did on the master server).

[root@lnxclient ~]# yum install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm

Now we will install the icinga 2,

[root@lnxclient ~]# yum install icinga2 -y

Once packages have been installed, start the icinga2 service & enable it for boot,

[root@lnxclient ~]# systemctl start icinga2 && systemctl enable icinga2

Now start the  icinga setup wizard from host,

[root@lnxclient ~]# icinga2 node wizard

Here we need to press ‘y’ at the first prompt, i.e. ‘Specify if it’s a satellite/client setup’; after that we need to press ‘y’ again for the prompt ‘Do you want to establish a connection to the parent node from this node’, & then provide the master server information. We will also be asked to enter the ticket we created earlier; specify that as well.

After that’s done, restart the icinga2 service to implement the changes,

[root@lnxclient ~]# systemctl restart icinga2

We will now update the host node information on master, switch back to the master & we will define host or client (lnxclient.example.com) in “/etc/icinga2/conf.d/hosts.conf” file.

Add the following content at the end of file.

[linuxtechi@icinga ~]$ sudo -i
[root@icinga ~]# vi /etc/icinga2/conf.d/hosts.conf

object Zone "lnxclient.example.com" {
  endpoints = [ "lnxclient.example.com" ]
  parent = "icinga.example.com"
}

object Endpoint "lnxclient.example.com" {
  host = ""
}

object Host "lnxclient.example.com" {
  import "generic-host"
  address = ""
  vars.http_vhosts["http"] = {
    http_uri = "/"
  }
  vars.disks["disk"] = {
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
  vars.client_endpoint = "lnxclient.example.com"
}

Save and exit the file

Change the host, master name & IP address as per your setup. Now restart the icinga2 service & we can visit ‘icingaweb2’ page to start monitoring the host services,

[root@icinga ~]# systemctl restart icinga2

Note:- For more information in configuring more services, read the official Icinga2 documentation at (https://www.icinga.com/docs/icinga2/latest/doc/04-configuring-icinga-2/)

Now open the Icinga web 2 portal with the following URL,

& provide your credentials.  Inside the dashboard, go to ‘Overview‘ then ‘Hosts‘ to check the hosts that are being monitored by icinga2. As seen in the below screenshot, our server is monitoring localhost or icinga.example.com  & the host node we just added,

Also we can see all the services from main dashboard,

We will now add a Windows host on icinga2 server for monitoring.

Add & Configure a Windows host (Windows Server 2012 R2) to Icinga 2

Adding a Windows host to Icinga 2 is also pretty easy & straightforward. For a Windows system, we need to download an MSI installer from the official website based on your system (http://packages.icinga.com/windows/).

Once downloaded, run the installer & complete the initial installation by just pressing next. Once the installation has been complete, Run the setup wizard & we will get the following screen,

Here mention a hostname that you want your Windows system to be identified with, & then create a ticket with that hostname from the Icinga server (as we did for the Linux system) with the following command,

[root@icinga ~]# icinga2 pki ticket --cn fileserver
[root@icinga ~]#

Now click on ‘Add’ to add the information; we will then be asked to enter our Icinga 2 server name & IP address,

After it has been added, check the following boxes ‘Listen for connection from master/satellite instance(s) ’, ‘Accept commands from master/satellite instance(s)’ & also ‘Accept config updates from master/satellite instance(s)’ & press ‘Next’ to continue the setup.

We will then be asked to verify the information, check & press ‘Next‘ to continue,

Configuration will then complete, click ‘Finish‘ to exit out of setup

Now go back to master server, & we  will now update the windows host node information on master & add the following content at the end of  file  “/etc/icinga2/conf.d/hosts.conf

[root@icinga ~]# vi /etc/icinga2/conf.d/hosts.conf
object Zone "fileserver" {
  endpoints = [ "fileserver" ]
  parent = "icinga.example.com"
}

object Endpoint "fileserver" {
  host = ""
}

object Host "fileserver" {
  import "generic-host"
  address = ""
  vars.http_vhosts["http"] = {
    http_uri = "/"
  }
  vars.disks["disk"] = {
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
  vars.client_endpoint = "fileserver"
}

Save and exit the file and then restart icinga2 service

[root@icinga ~]# systemctl restart icinga2

Now open the icinga web 2 portal with the following URL,

Now go to ‘Overview’ then ‘Hosts’ to check the hosts that are being monitored by icinga 2.

With this we end our tutorial. Please feel free to send in your questions & queries.


SuperTux 2 is an exciting 2D jump-and-run game, pretty much similar to the ever-popular Super Mario series. It is a free and open-source Linux game that was originally conceived and developed by Bill Kendrick and is currently maintained by the SuperTux Development Team. In SuperTux 2 you play as Tux the penguin, who takes on a dangerous journey through thick, dense forests, collecting coins and power-ups along the way and fighting all kinds of enemies. Simply stay alive and advance through each level.

Install SuperTux 2 on Ubuntu 16.04

SuperTux 2 is available in the default repository itself, and you need to run the following commands to install this exciting game on your Ubuntu 16.04 system

linuxtechi@nixworld:~$ sudo apt-get update
linuxtechi@nixworld:~$ sudo apt-get install supertux -y

The above commands will automatically install SuperTux 2.

Access & Start SuperTux 2 Game

It can be started either by clicking the SuperTux icon or by typing the command “supertux2” in the terminal window.

linuxtechi@nixworld:~$ supertux2


Click on SuperTux 2 icon

Install SuperTux 2 on Debian 9 System

The SuperTux 2 game’s Debian package is available in the default Debian 9 repositories. To install the game, open the terminal and run the beneath commands

linuxtechi@nixhome:~$ sudo apt-get update
linuxtechi@nixhome:~$ sudo apt-get install supertux -y

Once the SuperTux package is installed successfully, we can access and start playing this game either from the command line or the GUI.

From the terminal type the below command,

linuxtechi@nixhome:~$ supertux2

From the GUI, type supertux2 in the search box; an example is shown below,

Click on SuperTux 2 icon

Click on the Start Game to play the game. Enjoy & have Fun.

Install SuperTux 2 on Linux Mint 18.3

Installing SuperTux 2 on Linux Mint is pretty simple, as the supertux Debian package is available in the Linux Mint 18.3 repositories. So you just need to run the below set of commands to install this exciting 2D game on your system.

pradeep@mintnix ~ $ sudo apt-get update
pradeep@mintnix ~ $ sudo apt-get install supertux

Once the supertux is installed successfully, we can access and start this classic game either from terminal or GUI.

Accessing SuperTux 2 from the terminal,

pradeep@mintnix ~ $ supertux2

Accessing SuperTux 2 from GUI,

Click on the SuperTux 2 icon

To play this game, click on “Start Game“. Enjoy and explore this classic Game.


Welcome back LinuxTechi users, as we continue our OpenStack deployment using the TripleO approach. In this tutorial we will discuss the steps to deploy the TripleO overcloud servers (controller and compute) via the undercloud, on CentOS 7 VMs hosted on a KVM hypervisor.

In our last article we discussed our lab setup details and the installation of the TripleO undercloud on CentOS 7; for the undercloud installation steps, refer to:

I am assuming undercloud is already installed and configured. Let’s start overcloud deployment steps.

Step:1 Download and Import Overcloud images

Login to the undercloud server as the stack user and download the overcloud images from the below URLs. In my case I am using the latest version of OpenStack (i.e. Pike); you can download the images that suit your environment and OpenStack version,


[stack@undercloud ~]$ sudo wget https://images.rdoproject.org/pike/delorean/current-tripleo-rdo/overcloud-full.tar --no-check-certificate
[stack@undercloud ~]$ sudo wget https://images.rdoproject.org/pike/delorean/current-tripleo-rdo/ironic-python-agent.tar --no-check-certificate
[stack@undercloud ~]$ mkdir ~/images
[stack@undercloud ~]$ tar -xpvf ironic-python-agent.tar -C ~/images/
[stack@undercloud ~]$ tar -xpvf overcloud-full.tar -C ~/images/
[stack@undercloud ~]$  source ~/stackrc
(undercloud) [stack@undercloud ~]$ openstack overcloud image upload --image-path ~/images/

Now view the uploaded images

(undercloud) [stack@undercloud ~]$ openstack image list

| ID                                   | Name                   | Status |
| 003300db-bbe1-4fc3-af39-bca9f56cc169 | bm-deploy-kernel       | active |
| 1a1d7ddf-9287-40fb-aea5-3aacf41e76a2 | bm-deploy-ramdisk      | active |
| be978ecb-2d33-4faf-80c0-8cb0625f1a45 | overcloud-full         | active |
| 0c0c74bc-0b0f-4324-81b4-e0abeed9455e | overcloud-full-initrd  | active |
| 0bf28731-d645-401f-9557-f24b3b8a6912 | overcloud-full-vmlinuz | active |
(undercloud) [stack@undercloud ~]$
Step:2 Add DNS Server in the undercloud network

Use below openstack command to view the subnet

(undercloud) [stack@undercloud ~]$ openstack subnet list
| ID                                   | Name            | Network                              | Subnet           |
| b3c8033d-ea58-44f3-8de1-5d5e29cad74b | ctlplane-subnet | fe1c940b-7f89-428a-86e1-2d134ce8d807 | |
(undercloud) [stack@undercloud ~]$ openstack subnet show  b3c8033d-ea58-44f3-8de1-5d5e29cad74b

Use below command to add dns server

(undercloud) [stack@undercloud ~]$ neutron subnet-update  b3c8033d-ea58-44f3-8de1-5d5e29cad74b --dns-nameserver

Now verify whether DNS server has been added or not

(undercloud) [stack@undercloud ~]$ openstack subnet show b3c8033d-ea58-44f3-8de1-5d5e29cad74b

Output would be something like below

Step:3 Create VMs for Overcloud’s Controller & Compute

Go to the physical server (KVM hypervisor) and define two VMs for the compute nodes and one for the controller node.

Use below commands to create qcow2 image for controller and compute VMs.

[root@kvm-hypervisor ~]# cd /var/lib/libvirt/images/
[root@kvm-hypervisor images]# qemu-img create -f qcow2 -o preallocation=metadata overcloud-controller.qcow2 60G
[root@kvm-hypervisor images]# qemu-img create -f qcow2 -o preallocation=metadata overcloud-compute1.qcow2 60G
[root@kvm-hypervisor images]# qemu-img create -f qcow2 -o preallocation=metadata overcloud-compute2.qcow2 60G
[root@kvm-hypervisor images]# chown qemu:qemu overcloud-*

Use the below virt-install and virsh define commands to create and define the overcloud VMs on the KVM hypervisor,

Note: Change RAM, vcpu and CPU family that suits to your environment

[root@kvm-hypervisor ~]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-controller.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-controller --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-controller.xml
[root@kvm-hypervisor ~]#
[root@kvm-hypervisor ~]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-compute1 --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-compute1.xml
[root@kvm-hypervisor ~]#
[root@kvm-hypervisor ~]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-compute2.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-compute2 --cpu Haswell,+vmx --dry-run --print-xml > /tmp/overcloud-compute2.xml
[root@kvm-hypervisor ~]#
[root@kvm-hypervisor ~]# virsh define --file /tmp/overcloud-controller.xml
[root@kvm-hypervisor ~]# virsh define --file /tmp/overcloud-compute1.xml
[root@kvm-hypervisor ~]# virsh define --file /tmp/overcloud-compute2.xml
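Since the three virt-install/virsh invocations differ only in the VM name, they can also be collapsed into a single loop. A sketch, assuming the same sizing, CPU and network options as above:

```shell
# Generate the domain XML and define all three overcloud VMs in one pass.
for vm in overcloud-controller overcloud-compute1 overcloud-compute2; do
  virt-install --ram 8192 --vcpus 2 --os-variant rhel7 \
    --disk path=/var/lib/libvirt/images/${vm}.qcow2,device=disk,bus=virtio,format=qcow2 \
    --noautoconsole --vnc --network network:provisioning --network network:external \
    --name "$vm" --cpu Haswell,+vmx --dry-run --print-xml > /tmp/${vm}.xml
  virsh define --file /tmp/${vm}.xml
done
```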

Verify the VMs status using virsh list command,

[root@kvm-hypervisor ~]# virsh list --all | grep overcloud*
 -     overcloud-compute1             shut off
 -     overcloud-compute2             shut off
 -     overcloud-controller           shut off
[root@kvm-hypervisor ~]#
Step:4 Install and Configure vbmc (Virtual BMC) on undercloud

vbmc is a power management tool for virtual machines; VMs added to it can be managed via ipmitool.

Using vbmc we can power a VM off, power it on and also verify its power status. We require vbmc because the undercloud needs to power the VMs on / off during the deployment.

Note: vbmc is the replacement for pxe_ssh, as pxe_ssh is deprecated now.

Run below yum install command to install virtualbmc,

[stack@undercloud ~]$ sudo yum install python-virtualbmc -y

Exchange the ssh keys from the undercloud VM to the physical server (KVM hypervisor),

[stack@undercloud ~]$ ssh-copy-id root@

Add the VMs to vbmc using the following commands; in my case the libvirt-uri is “qemu+ssh://root@

[stack@undercloud ~]$ vbmc add overcloud-compute1 --port 6001 --username admin --password password --libvirt-uri qemu+ssh://root@
[stack@undercloud ~]$ vbmc start overcloud-compute1
[stack@undercloud ~]$ vbmc add overcloud-compute2 --port 6002 --username admin --password password --libvirt-uri qemu+ssh://root@
[stack@undercloud ~]$ vbmc start overcloud-compute2
[stack@undercloud ~]$ vbmc add overcloud-controller --port 6003 --username admin --password password --libvirt-uri qemu+ssh://root@
[stack@undercloud ~]$ vbmc start overcloud-controller

Verify the VMs status and its ports,

[stack@undercloud ~]$ vbmc list
|     Domain name      |  Status | Address | Port |
|  overcloud-compute1  | running |    ::   | 6001 |
|  overcloud-compute2  | running |    ::   | 6002 |
| overcloud-controller | running |    ::   | 6003 |
[stack@undercloud ~]$

To view power status of VMs, use below command,

[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P password -H -p 6001 power status
Chassis Power is off
[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P password -H -p 6002 power status
Chassis Power is off
[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P password -H -p 6003 power status
Chassis Power is off
[stack@undercloud ~]$
Step:5 Create and Import overcloud nodes inventory via json file

Let’s create an inventory file (json); it will include the details of the overcloud servers (controller and compute).

First capture the MAC addresses of the overcloud nodes; to do this, go to the KVM hypervisor and run the below commands,

[root@kvm-hypervisor ~]# virsh domiflist overcloud-compute1 | grep provisioning
-          network    provisioning virtio      52:54:00:08:63:bd
[root@kvm-hypervisor ~]# virsh domiflist overcloud-compute2 | grep provisioning
-          network    provisioning virtio      52:54:00:72:1d:21
[root@kvm-hypervisor ~]# virsh domiflist overcloud-controller | grep provisioning
-          network    provisioning virtio      52:54:00:0a:dd:57
[root@kvm-hypervisor ~]#
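The MAC column can also be pulled out programmatically, which is handy when filling in an inventory file. A sketch assuming the domiflist output format shown above, where the MAC is the 5th column:

```shell
# Print "<vm> <mac>" for the provisioning NIC of each overcloud VM.
for vm in overcloud-compute1 overcloud-compute2 overcloud-controller; do
  echo "$vm $(virsh domiflist "$vm" | awk '/provisioning/ {print $5}')"
done
```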

Now create a json file with name “overcloud-stackenv.json”

[stack@undercloud ~]$ vi overcloud-stackenv.json
{
  "nodes": [
    {
      "arch": "x86_64",
      "disk": "60",
      "memory": "8192",
      "name": "overcloud-compute1",
      "pm_user": "admin",
      "pm_addr": "",
      "pm_password": "password",
      "pm_port": "6001",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "52:54:00:08:63:bd"
      ],
      "cpu": "2"
    },
    {
      "arch": "x86_64",
      "disk": "60",
      "memory": "8192",
      "name": "overcloud-compute2",
      "pm_user": "admin",
      "pm_addr": "",
      "pm_password": "password",
      "pm_port": "6002",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "52:54:00:72:1d:21"
      ],
      "cpu": "2"
    },
    {
      "arch": "x86_64",
      "disk": "60",
      "memory": "8192",
      "name": "overcloud-controller",
      "pm_user": "admin",
      "pm_addr": "",
      "pm_password": "password",
      "pm_port": "6003",
      "pm_type": "pxe_ipmitool",
      "mac": [
        "52:54:00:0a:dd:57"
      ],
      "cpu": "2"
    }
  ]
}

Replace the mac address of the VMs that suits to your environment.

Import the Nodes and do the introspection using below command

[stack@undercloud ~]$ source stackrc
(undercloud) [stack@undercloud ~]$ openstack overcloud node import --introspect --provide overcloud-stackenv.json

Output of above command should be something like below:

View the overcloud node details using the below command; we have to make sure the provisioning state of each node is ‘available’:

(undercloud) [stack@undercloud ~]$ openstack baremetal node list
| UUID                                 | Name                 | Instance UUID | Power State | Provisioning State | Maintenance |
| 44884524-a959-4477-87f9-143f716f422b | overcloud-compute1   | None          | power off   | available          | False       |
| 445ced0a-d449-419e-8c43-e0f124017300 | overcloud-compute2   | None          | power off   | available          | False       |
| a625fdfa-9a18-4d7c-aa36-492575f19307 | overcloud-controller | None          | power off   | available          | False       |
(undercloud) [stack@undercloud ~]$
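That readiness check can also be scripted; a sketch using the openstack CLI’s -f value / -c output filters to print only the provisioning state column:

```shell
# Any line that is not exactly "available" means that node is not
# ready for deployment yet.
openstack baremetal node list -f value -c "Provisioning State" \
  | grep -vx available && echo "some nodes are not ready" || echo "all nodes available"
```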
Set Roles or Profile to overcloud nodes:

To set the role for each overcloud node, use the below commands. The VMs named “overcloud-compute1/2” will act as OpenStack compute nodes and the VM named “overcloud-controller” will act as the OpenStack controller node.

(undercloud) [stack@undercloud ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 44884524-a959-4477-87f9-143f716f422b
(undercloud) [stack@undercloud ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 445ced0a-d449-419e-8c43-e0f124017300
(undercloud) [stack@undercloud ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' a625fdfa-9a18-4d7c-aa36-492575f19307

Now use below openstack command to verify the role of each node,

(undercloud) [stack@undercloud ~]$ openstack overcloud profiles list
| Node UUID                            | Node Name            | Provision State | Current Profile | Possible Profiles |
| 44884524-a959-4477-87f9-143f716f422b | overcloud-compute1   | available       | compute         |                   |
| 445ced0a-d449-419e-8c43-e0f124017300 | overcloud-compute2   | available       | compute         |                   |
| a625fdfa-9a18-4d7c-aa36-492575f19307 | overcloud-controller | available       | control         |                   |
(undercloud) [stack@undercloud ~]$
Step:6 Start deployment of Overcloud Nodes

We have now completed all the steps required for the overcloud deployment from the undercloud server,

Run the below openstack command from undercloud to start the deployment,

(undercloud) [stack@undercloud ~]$ openstack overcloud deploy --templates   --control-scale 1 --compute-scale 2 --control-flavor control --compute-flavor compute

In the above command we are using the options like “–compute-scale 2” and “–control-scale 1“, it means we will use two compute nodes and one controller node.

Please note that the above command will take approx. 40 to 50 minutes or more, depending on hardware or VM performance, so you have to wait until the command finishes.

Output of the above command should be something like below:

Run the beneath command to view IP address of overcloud nodes

(undercloud) [stack@undercloud ~]$ nova list
| ID                                   | Name                    | Status | Task State | Power State | Networks                 |
| 8c1a556f-9f79-449b-ae15-d111a96b8349 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane= |
| 31e54540-79a3-4182-8ecc-6e0f8cd3db11 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane= |
| edab92ce-825f-48c0-ba83-1445572c15b9 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane= |
(undercloud) [stack@undercloud ~]$

Connect to the overcloud nodes using the ‘heat-admin‘ user:

(undercloud) [stack@undercloud ~]$ ssh heat-admin@
Last login: Tue Jan 16 14:32:55 2018 from gateway
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]# hostname -f
[root@overcloud-controller-0 ~]#

Similarly we can connect to rest of the compute nodes

Once the overcloud has been deployed successfully, all the admin credentials are stored in file “overcloudrc” in stack user’s home directory

(undercloud) [stack@undercloud ~]$ cat ~/overcloudrc

Now try to access the Horizon Dashboard using the credentials mentioned in overcloudrc file.

Open the Web Browser and type the url:

This confirms that the overcloud has been deployed successfully. Now create projects and networks, upload cloud images, and then start creating virtual machines. That’s all from this tutorial; please do share your feedback and comments.

