PostgreSQL is an advanced open source Object-Relational Database Management System (ORDBMS). It is extensible and highly scalable, meaning it can handle workloads ranging from single-machine applications to enterprise web services with many concurrent users. PostgreSQL is transactional and ACID-compliant (Atomicity, Consistency, Isolation, Durability).
It supports a large part of the SQL standard, and offers many features including:
Multiversion concurrency control
As previously said, the PostgreSQL database system can be extended by its users. There are different ways to do this, like adding new functions, operators, data types, index methods, procedural languages, etc.
It is developed by the PostgreSQL Global Development Group and released under the terms of the PostgreSQL License.
PostgreSQL provides many ways to replicate a database. In this tutorial we will configure master/slave replication, which is the process of syncing data between two databases by copying from a database on one server (the master) to a database on another server (the slave).
This configuration will be done on a server running Ubuntu 16.04.
PostgreSQL 9.6 installed on both Ubuntu 16.04 servers
UFW (Uncomplicated Firewall) is a tool for managing iptables-based firewalls on Ubuntu systems. Install it on both servers through apt by executing:
# apt-get install -y ufw
Next, allow the PostgreSQL and SSH services through the firewall. To do this, execute:
# ufw allow ssh
# ufw allow postgresql
Enable the firewall:
# ufw enable
Configure PostgreSQL Master Server
The master server has read and write access to the database, and is the one that streams data to the slave server.
With a text editor, edit the PostgreSQL main configuration file, which is /etc/postgresql/9.6/main/postgresql.conf:
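The exact values depend on your setup; a minimal sketch of the settings that enable streaming replication on the master might look like this (the number of WAL senders and kept segments below are assumptions to tune for your environment):

```conf
# /etc/postgresql/9.6/main/postgresql.conf (master) -- sketch
listen_addresses = '*'        # or the master's address, so the slave can connect
wal_level = replica           # write enough WAL information for a standby server
max_wal_senders = 3           # maximum concurrent streaming replication connections
wal_keep_segments = 64        # WAL segments retained for standbys that fall behind
```

A matching entry for the replication user also has to be added to pg_hba.conf on the master so that the slave is allowed to connect.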
We have seen how to configure the PostgreSQL master/slave replication, by using two servers running Ubuntu 16.04. This is just one of the many replication capabilities provided by this advanced and fully open source database system.
It has many features, like:
Help desk chat
We will install Rocket.Chat on a Debian 9 server.
The first thing to do is to satisfy Rocket.Chat dependencies. Execute the following apt command:
# apt install build-essential graphicsmagick
Rocket.Chat uses MongoDB as its database system. There are no Debian 9 packages for MongoDB yet, so we will install it from the tarball.
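As a sketch — the version number and download URL below are assumptions, so check mongodb.com for the current release — the tarball installation boils down to:

```shell
cd /opt
# download and unpack the MongoDB release tarball (assumed version)
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.9.tgz
tar -xzf mongodb-linux-x86_64-3.4.9.tgz
mv mongodb-linux-x86_64-3.4.9 mongodb
# make the mongod and mongo binaries available in the current shell
export PATH=/opt/mongodb/bin:$PATH
```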
The final step is to visit https://chat.example.com in a web browser to register a new admin account and finish the graphical configuration.
There you have it! We’ve just explained how to install and configure your Rocket.Chat Server on a Debian 9 server using NGINX. This useful online communication program can help your team work more efficiently and with more collaboration!
When talking about databases, in general, we refer to two major families: RDBMS (Relational Database Management System), which use as user and application program interface a language named Structured Query Language (or SQL) and non-relational database management systems, or NoSQL databases. OrientDB is part of the second family.
Between the two models there is a huge difference in the way they consider (and store) data.
Relational Database Management Systems
In the relational model (like MySQL, or its fork, MariaDB), a database is a set of tables, each containing one or more data categories organized in columns. Each row of the DB contains a unique instance of data for categories defined by columns.
Just as an example, consider a table containing customers. Each row corresponds to a customer, with columns for name, address, and any other required information.
Another table could contain orders, with product, customer, date and everything else. A user of this DB can obtain a view that fits their needs, for example a report about customers that bought products in a specific price range.
NoSQL Database Management Systems
In the NoSQL (or Not only SQL) database management systems, databases are designed implementing different “formats” for data, like a document, key-value, graph and others. The database systems realized with this paradigm are built especially for large-scale database clusters, and huge web applications. Today, NoSQL databases are used by major companies like Google and Amazon.
This is a simple model pairing a unique key with a value. These systems are performant and highly scalable for caching. Examples include BerkeleyDB and MemcacheDB.
As the name suggests, these databases store data using graph models, meaning that data is organized as nodes and interconnections between them. This is a flexible model which can evolve over time and use. These systems are applied where there is a need to map relationships.
Examples include IBM Graph, Neo4j, and OrientDB.
OrientDB, as stated by the company behind it, is a multi-model NoSQL Database Management System that "combines the power of graphs with documents, key/value, reactive, object-oriented and geospatial models into one scalable, high-performance operational database".
OrientDB has also support for SQL, with extensions to manipulate trees and graphs.
One server running CentOS 7
OpenJDK or Oracle Java installed on the server
This tutorial explains how to install and configure OrientDB Community on a server powered by CentOS 7.
Step 1 – Create a New User
First of all, create a new user to run OrientDB. This lets you run the database in an isolated environment. To create the user, execute the following command:
# adduser orientdb -d /opt/orientdb
Step 2 – Download OrientDB Binary Archive
At this point, download the OrientDB archive in the /opt/orientdb directory:
Note: at the time of writing, 2.2.29 is the latest stable version.
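For example — the download URL here is an assumption based on OrientDB's release naming, so verify it on the OrientDB site before using it:

```shell
cd /opt/orientdb
# save the release tarball with the name used in the extraction step below
wget -O orientdb.tar.gz "https://orientdb.com/download.php?file=orientdb-community-importers-2.2.29.tar.gz"
```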
Step 3 – Install OrientDB
Extract the downloaded archive:
# cd /opt/orientdb
# tar -xf orientdb.tar.gz
tar will extract the files into a directory named orientdb-community-importers-2.2.29. Move everything into /opt/orientdb:
# mv orientdb-community*/* .
Make the orientdb user the owner of the extracted files:
# chown -R orientdb:orientdb /opt/orientdb
Start OrientDB Server
Starting the OrientDB server requires executing the shell script contained in orientdb/bin/:
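Assuming the layout used in this guide, the server can be launched as the unprivileged orientdb user:

```shell
# run the startup script shipped in the bin/ directory of the installation
su - orientdb -c /opt/orientdb/bin/server.sh
```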
During the first start, the script will display some information and ask for an OrientDB root password:
| WARNING: FIRST RUN CONFIGURATION |
| This is the first time the server is running. Please type a |
| password of your choice for the 'root' user or leave it blank |
| to auto-generate it. |
| To avoid this message set the environment variable or JVM |
| setting ORIENTDB_ROOT_PASSWORD to the root password to use. |
Root password [BLANK=auto generate it]: ********
Please confirm the root password: ********
To stop OrientDB, hit Ctrl+C.
Create a systemd Service for OrientDB
Create a new systemd service to easily manage OrientDB start and stop. With a text editor, create a new file:
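A minimal unit file, assuming the paths and user created earlier in this guide, might look like this (saved, for example, as /etc/systemd/system/orientdb.service):

```ini
[Unit]
Description=OrientDB Server
After=network.target

[Service]
User=orientdb
Group=orientdb
ExecStart=/opt/orientdb/bin/server.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving it, reload systemd with systemctl daemon-reload; the server can then be managed with systemctl start orientdb and systemctl stop orientdb.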
What Is WebERP Accounting and Business Management System
webERP is an entirely web-based accounting and business management system. It is particularly suitable for distributed businesses in wholesale, distribution and manufacturing. webERP can be customized with third party complementary components and can also function as a web-shop or Retail Management System.
According to the project’s web site “the growth of webERP adoption has been entirely through word of mouth testimony – there has never been a marketing or advertising push to “sell” webERP. Of course there are no funds nor commercial incentive to do so for free software. This growth is built on reputation and solid practical functionality that works as tried and tested by an increasing number of businesses.”
WebERP Main Features
WebERP has many features:
Runs on any web-server that can accommodate PHP – can use an ISP instead of having/maintaining own server
Produces reports to Portable Document Format – PDF for accurate positioning of text
All reports and scripts easily modifiable PHP text
All processing on the server and no installation required on client machines
Fully UTF-8 compliant. PDF reports produced using Adobe CID fonts for lightweight PDFs using the UTF-8 character set for any language
Multi-theme – each user can see the interface in their preferred graphical theme
The underlying code of the system is written so as to maximise its readability for those new to PHP coding, the idea being that business users will be able to administer and adapt the system to exactly suit their needs.
Users can be defined with access to only certain options using a role based model
Options applicable to certain roles can be graphically configured and users defined as fulfilling a given role.
Incorrect entry of the password (more than 3 times) blocks the account until it is reset by the system administrator. This guards against brute-force password attacks.
Pages can be encrypted using SSL and webERP can be configured to only display pages using SSL to ensure that all information passing over the internet is encrypted.
Very flexible taxation options suitable for Canada, US, South Africa, UK, Australia, NZ and most other countries
Tax rates dependent on the type of product – using tax categories
Tax rates dependent on the location of the warehouse dispatched from
Tax rates dependent on the location of the customer
Multiple taxes payable to different tax authorities
Each tax posted to different user-definable general ledger accounts – if linked to AR/AP
In this guide we will show how to install webERP on an Ubuntu 16.04 server with a LAMP stack installed.
Install a LAMP stack (you can follow our guide), and then go on with the MariaDB configuration.
We need to create a new database and user for webERP. First of all, log in to the MariaDB shell:
$ mysql -u root -p
Create a new user for WebERP. In this guide we will create the weberp_usr user. Execute the following MariaDB query:
MariaDB [(none)]> CREATE USER 'weberp_usr'@'localhost' IDENTIFIED BY 'usr_strong_password';
Next, create a new database. We will name it weberpdb:
MariaDB [(none)]> CREATE DATABASE weberpdb;
Grant all the privileges to the weberp_usr user on the new database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON weberpdb.* TO 'weberp_usr'@'localhost';
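To make the new grants effective and leave the MariaDB shell, two more statements are enough:

```sql
FLUSH PRIVILEGES;
EXIT;
```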
Docker Compose is a tool for running multi-container Docker applications. To configure an application’s services with Compose we use a configuration file, and then, executing a single command, it is possible to create and start all the services specified in the configuration.
Docker Compose can be useful for many different projects, including:
Development: with the Compose command line tools we create (and interact with) an isolated environment which will host the application being developed.
By using the Compose file, developers document and configure all of the application’s service dependencies.
Automated testing: this use case requires an environment for running tests in. Compose provides a convenient way to manage isolated testing environments for a test suite. The full environment is defined in the Compose file.
Docker Compose was built on the source code of Fig, a community project that is no longer maintained.
In this tutorial we will see how to install Docker Compose on a CentOS 7 server.
First of all, install Docker. The easiest way to install it is to download an installation script provided by the Docker project:
$ wget -qO- https://get.docker.com/ | sh
One required step is to correctly configure the user for Docker. In particular, add your user to the docker group by executing the following command:
$ sudo usermod -aG docker $(whoami)
Log out and log in again to update the user groups list.
Next, enable Docker to start at boot time:
# systemctl enable docker
# systemctl start docker
Install Docker Compose
Once Docker has been installed, install Docker Compose. First of all, install the EPEL repository by executing the command:
# yum install epel-release
Next, install python-pip:
# yum install -y python-pip
At this point, it is possible to install Docker Compose by executing a pip command:
# pip install docker-compose
Also upgrade the Python packages on CentOS 7:
# yum upgrade python*
Check Docker Compose version with the following command:
$ docker-compose -v
The output should be something like this:
docker-compose version 1.16.1, build 6d1ac219
Testing Docker Compose
The Docker Hub includes a Hello World image for demonstration purposes, illustrating the configuration required to run a container with Docker Compose.
Create a new directory and move into it:
$ mkdir hello-world
$ cd hello-world
Create a new YAML file:
$ $EDITOR docker-compose.yml
In this file paste the following content:
Note: the first line is used as part of the container name.
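The file's content is not reproduced here; judging from the container names in the output shown below, it defined a single service named unixmen-compose-test based on the hello-world image, roughly:

```yaml
# docker-compose.yml -- reconstructed sketch
unixmen-compose-test:
    image: hello-world
```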
Save and exit.
Run the container
Next, execute the following command in the hello-world directory:
$ sudo docker-compose up
If everything is correct, this should be the output shown by Compose:
Pulling unixmen-compose-test (hello-world:latest)...
latest: Pulling from library/hello-world
b04784fba78d: Pull complete
Status: Downloaded newer image for hello-world:latest
Creating helloworld_unixmen-compose-test_1 ...
Creating helloworld_unixmen-compose-test_1 ... done
Attaching to helloworld_unixmen-compose-test_1
unixmen-compose-test_1 | Hello from Docker!
unixmen-compose-test_1 | This message shows that your installation appears to be working correctly.
unixmen-compose-test_1 | To generate this message, Docker took the following steps:
unixmen-compose-test_1 | 1. The Docker client contacted the Docker daemon.
unixmen-compose-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
unixmen-compose-test_1 | 3. The Docker daemon created a new container from that image which runs the
unixmen-compose-test_1 | executable that produces the output you are currently reading.
unixmen-compose-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
unixmen-compose-test_1 | to your terminal.
unixmen-compose-test_1 | To try something more ambitious, you can run an Ubuntu container with:
unixmen-compose-test_1 | $ docker run -it ubuntu bash
unixmen-compose-test_1 | Share images, automate workflows, and more with a free Docker ID:
unixmen-compose-test_1 | https://cloud.docker.com/
unixmen-compose-test_1 | For more examples and ideas, visit:
unixmen-compose-test_1 | https://docs.docker.com/engine/userguide/
helloworld_unixmen-compose-test_1 exited with code 0
Docker containers only run as long as the command is active, so the container will stop when the test finishes running.
In this tutorial we have seen how to install and test Docker Compose on a CentOS 7 server, using a Compose file in the YAML format.
Sensu is a free and open source tool for composing a monitoring system. It is entirely written in Ruby. It uses RabbitMQ to handle messages and Redis to store its data.
Sensu focuses on composability and extensibility, allowing you to reuse monitoring checks and plugins from tools like Nagios and Zabbix.
This framework was designed to work with software like Puppet, Chef and Ansible, and it does not require an additional workflow.
As stated in the documentation, “all versions of Sensu (including Sensu Enterprise) are based on the same core components and functionality, which are provided by the Sensu open-source software project and collectively referred to as Sensu Core. Sensu Core provides multiple processes, including the Sensu server (sensu-server), Sensu API (sensu-api), and Sensu client (sensu-client).
Installer packages are available for most modern operating systems via native installer packages (e.g. .deb, .rpm, .msi, .pkg, etc) which are available for download from the Sensu website, and from package manager repositories for APT (for Ubuntu/Debian systems), and YUM (for RHEL/CentOS).”
In this tutorial we will show how to install Sensu on an Ubuntu 16.04 server.
RabbitMQ runs on top of Erlang, so, first of all, we will install Erlang on our server.
Erlang is not available in the Ubuntu repositories, but the Erlang project provides its own. Let's add the repository, together with the Erlang public key, to the trusted key list by executing the following commands:
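The commands are not shown here; assuming the Erlang Solutions repository (the usual source of Erlang packages for Ubuntu — key URL and repository line are assumptions to verify against their site), the steps would be roughly:

```shell
# add the Erlang Solutions signing key to apt's trusted keys
wget -qO- https://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc | apt-key add -
# add the repository for Ubuntu 16.04 (xenial), then install Erlang
echo "deb https://packages.erlang-solutions.com/ubuntu xenial contrib" > /etc/apt/sources.list.d/erlang.list
apt-get update && apt-get install -y erlang
```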
The next step is to install Sensu. It is not available in the Ubuntu repositories, but, as we said in the introduction, the project provides its own repository for Ubuntu. Add the Sensu public key and repository to the apt sources list.
First of all, add the key by executing the following command:
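The key is fetched from the same host as the repository configured below; a sketch (verify the exact key URL in the Sensu documentation):

```shell
# download the Sensu public key and add it to apt's trusted keys
wget -qO- https://sensu.global.ssl.fastly.net/apt/pubkey.gpg | apt-key add -
```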
Next, we need to add the Sensu repository. Create a sensu.list file in /etc/apt/sources.list.d directory:
# $EDITOR /etc/apt/sources.list.d/sensu.list
In this file, paste the following content:
deb https://sensu.global.ssl.fastly.net/apt xenial main
Save and exit. Update the repositories list:
# apt-get update
Finally, install Sensu:
# apt-get install sensu
Once the installation is finished, we need to configure Sensu to use RabbitMQ and Redis. By default, Sensu loads its configuration from the /etc/sensu/conf.d/ directory. This is where we will create the RabbitMQ, Redis, and API configuration files.
For the RabbitMQ part, create a rabbitmq.json file in /etc/sensu/conf.d:
# $EDITOR /etc/sensu/conf.d/rabbitmq.json
To connect Sensu to RabbitMQ, paste the following content in the opened file:
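A minimal sketch of that file — the vhost, user, and password must match your RabbitMQ configuration and are assumptions here:

```json
{
  "rabbitmq": {
    "host": "127.0.0.1",
    "port": 5672,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "sensu_password"
  }
}
```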
By default, Sensu does not ship with a dashboard for monitoring it through a user-friendly web interface.
The framework was originally designed as an API-based monitoring solution, enabling operations teams to compose monitoring solutions where Sensu provides the monitoring instrumentation, collection of telemetry data, scalable event processing, comprehensive APIs and plugins for sending data to dedicated dashboard solutions. However, as the project matured, it was natural to work on a monitoring interface. As a result, today there are two dashboards: Uchiwa (for Sensu Core users), and the Sensu Enterprise Dashboard (for Sensu Enterprise customers).
In this tutorial, we will install the Uchiwa Dashboard.
First, add the public key by executing the following command:
Starting with Chrome 56, the browser developed by Google marks non-secure pages containing password and credit card input fields as "Not Secure" in the URL bar. The Mountain View giant announced this change almost a year ago.
Of course, everybody knows that secure is better than insecure; the big problem with HTTP is that it lacks a system for protecting communications between clients and servers. This exposes data to different kinds of attacks, for instance the man-in-the-middle (MITM) attack, in which an attacker intercepts your data. If you are performing transactions with your bank, using credit card information, or just entering a password to log in to a web site, this can become very dangerous.
This is why HTTPS exists (HTTP over TLS, or, HTTP over SSL, or, HTTP Secure).
If you are on Unixmen, you probably know what this means: SSL/TLS ensures encrypted connections.
So, if your job is to keep a web server up and running, you should switch to HTTPS.
To encrypt the traffic between server and client, web servers use SSL certificates. Let’s Encrypt helps in obtaining and installing a trusted certificate for free.
In this tutorial we will see how to secure an Apache Web Server on Ubuntu 16.04 using Let’s Encrypt.
Install Let’s Encrypt
Let’s Encrypt provides client software which fetches and installs certificates almost automatically. This software is called Certbot, and its developers maintain their own Ubuntu repository with up-to-date versions.
So, first of all, we will add the repository:
# add-apt-repository ppa:certbot/certbot
Next, update apt packages list:
# apt-get update
At this point, install Certbot:
# apt-get install python-certbot-apache
Install SSL Certificate
Once the Certbot client is installed, we can use it to obtain and install a new certificate for our server. It is possible to use a single certificate for many subdomains (or even domains); this can be done by simply passing all the domains as arguments to certbot.
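For example, with hypothetical domain names, a certificate covering both the bare domain and the www subdomain can be requested with:

```shell
# obtain and install a certificate for both names using Certbot's Apache plugin
certbot --apache -d example.com -d www.example.com
```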
Certbot will present a step-by-step process to customize the certificate options and to enter information such as an email address, which will be used for key recovery. During the process it is possible to choose which protocols to enable: both HTTP and HTTPS, or HTTPS only, meaning that all HTTP requests will be automatically redirected. Of course, the best choice is to use only HTTPS, unless there are serious reasons to keep unencrypted traffic to your server.
To verify the status of the SSL certificate, just go to the following link with a browser:
Let’s Encrypt certificates last for 90 days, so it’s up to you to renew them. Using Certbot, you can test the automatic renewal system with this command:
# certbot renew --dry-run
If it works, you can add a cron or systemd job to manage automatic renewal.
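A sketch of such a cron entry — the twice-a-day schedule is a common choice, not a requirement; running renew frequently lets Certbot pick up certificates approaching expiry:

```conf
# /etc/cron.d/certbot-renew -- sketch
0 */12 * * * root certbot renew --quiet
```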
We have seen how easy it can be to install an SSL certificate on an Apache web server running on top of Ubuntu 16.04, using the client software provided by Let’s Encrypt. At this point, if you point your browser to https://www.example.com or https://example.com, you will see that the site is correctly served through HTTPS.
TaskBoard is a free and open source software, inspired by the Kanban board, for keeping track of tasks.
Kanban is a technique for visualizing the flow of work and organizing projects. In particular, in software development it provides a visual process management system to help in deciding how to organize production.
As you can see in the image above, this software makes it easy to visually keep track of the evolution of your projects.
TaskBoard uses SQLite as a database, which means that we can use it without having to install MySQL or other “big” databases.
SQLite can be installed with the following yum command:
# yum install sqlite
TaskBoard installation is really easy: it just requires downloading and extracting the TaskBoard archive. Go to the Apache web root directory:
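For example — the download URL below is an assumption pointing at the project's GitHub archive, so verify it against the TaskBoard releases page:

```shell
cd /var/www/html
# fetch and unpack the TaskBoard archive
wget -O taskboard.zip https://github.com/kiswa/TaskBoard/archive/master.zip
unzip taskboard.zip && mv TaskBoard-master taskboard
# let the web server own the files (the apache user on CentOS)
chown -R apache:apache taskboard
```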
WordPress is a famous content management system based on PHP and MySQL, distributed under the terms of the GNU GPLv2 (or later). In most cases it is installed by using Apache or NGINX as web servers, or, as we explained in this tutorial, it can run on an isolated environment like Docker containers.
Alongside these choices, there is a new web server which is rapidly gaining popularity: Caddy.
Caddy (or the Caddy web server) is an open source HTTP/2 web server which enables HTTPS by default, without requiring external configuration. Caddy also has a strong integration with Let’s Encrypt.
This tutorial explains how to install and configure WordPress on top of your Caddy web server, installed following our guide.
As we said in the introduction, WordPress requires a web server, MySQL and PHP. First of all, install PHP and the extensions required by WordPress by executing the following command:
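The exact package set is an assumption for PHP 7.0 on Ubuntu 16.04; adapt the names to your release:

```shell
# PHP-FPM plus the extensions WordPress commonly needs
apt-get install -y php7.0-fpm php7.0-mysql php7.0-curl php7.0-gd php7.0-mbstring php7.0-xml
```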
Verify that PHP was correctly installed by checking its version:
$ php -v
Install and Configure MariaDB
MariaDB is also available in the repository, so just use apt:
# apt-get install mariadb-client mariadb-server
MariaDB is a MySQL fork, and its systemd service keeps the MySQL name:
# systemctl start mysql
Set MariaDB root password to secure your database:
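The configuration prompts reproduced below come from the standard MariaDB hardening script, so the command to run is:

```shell
# interactive script: sets the root password and removes insecure defaults
mysql_secure_installation
```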
You will be asked for the following configuration parameters:
Enter current password for root (enter for none): PRESS ENTER
Set root password? [Y/n] Y
ENTER YOUR PASSWORD
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
Once that step is complete you can access the MariaDB database with your password:
$ mysql -u root -p
Create New Database and User
Start the MariaDB shell:
$ mysql -u root -p
Use the MariaDB prompt to create a new database for WordPress. In this tutorial, we use wordpressdb as the database name, and wordpressusr as the username for the WP installation. So our code looks like this:
mysql> CREATE DATABASE wordpressdb DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
mysql> CREATE USER wordpressusr@localhost IDENTIFIED BY 'usr_strong_password';
mysql> GRANT ALL PRIVILEGES ON wordpressdb.* TO wordpressusr@localhost IDENTIFIED BY 'usr_strong_password';
Next, you can flush privileges and exit:
mysql> FLUSH PRIVILEGES;
mysql> EXIT;
Downloading and installing WordPress is quite an easy process, which requires executing just the following commands:
# cd /var/www
# wget https://wordpress.org/latest.zip
# unzip latest.zip
Change WordPress permissions with:
# chown -R www-data:www-data wordpress
Rename the WordPress config file and edit it:
# cd wordpress
# mv wp-config-sample.php wp-config.php
# $EDITOR wp-config.php
Here, change the database information, using the values specified during the MariaDB configuration:
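Using the database, user, and password created earlier in this guide, the relevant defines in wp-config.php become:

```php
define('DB_NAME', 'wordpressdb');
define('DB_USER', 'wordpressusr');
define('DB_PASSWORD', 'usr_strong_password');   // replace with the real password
define('DB_HOST', 'localhost');
```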
Note: firstname.lastname@example.org is the email address that will be used for Let’s Encrypt certificate request.
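A Caddyfile for this setup might look like the following sketch, using Caddy v1 directives; the domain name, web root, and PHP-FPM socket path are assumptions to adapt:

```conf
example.com {
    root /var/www/wordpress
    tls firstname.lastname@example.org
    fastcgi / /run/php/php7.0-fpm.sock php
}
```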
# systemctl restart caddy
As a last step, with a web browser, go to your website. This will start the WordPress GUI installation wizard which will finish the installation process and give you access to the WordPress dashboard.
At the end of the previous steps, a new WordPress instance will be running on top of this tiny yet powerful web server. Caddy will request certificates from Let’s Encrypt and automatically enable HTTPS connections, without any further manual configuration.