After a long development cycle it is finally ready to go: Fedora srvctl is here to manage your virtual server farm!

Take a look at:
– srvctl 2.x @ GitHub
– the srvctl online manual

Shared hosting made it possible for the web to be what it is today. However, time didn’t stop and software keeps evolving. Nowadays, the concept of shared hosting is rather problematic. Let’s say there are over 100 mini websites on a shared host. Each of those websites is an Apache virtual host connecting to a database – at least in the classic model. A serious failure in any one of those sites that crashes one of the shared programs will take every site down completely. Also, if one of the sites gets hacked, the whole server can be exposed to the attackers.
Needless to say, that configuration is nobody’s dream. Luckily, the experts have been busy in the last years and created the basis for a new technology called LXC. The kernel was extended with a namespace feature, which makes it possible to create lightweight virtualization, and thus isolation. Sound familiar? You might know tools like VirtualBox, where you run one operating system in a window of another – virtually. In that particular case, virtualization is not really lightweight, as the whole hardware logic has to be emulated for the virtual machine – the VM. However, we – the open source guys – do not really want to run other operating systems.
To get rid of the shared hosting concept we could actually run several instances of the same operating system, well isolated from each other, and the kernel namespace feature makes this possible. LXC is one of the new applications that implements this concept of containers, or virtual environments – the VEs. LXC 1.0 was released on the 20th of February 2014, so it is really fresh technology, and as soon as it came out I started working with it. After almost a full year of working with it, and using it for several months in production, the 2.x version is available as an rpm.

To install, use the following commands as root:

curl ftp://d250.hu/fedora-release/d250.repo > /etc/yum.repos.d/d250.repo
yum -y install srvctl

Once the rpm is installed you can start using it. The first step is to install tools and apply configuration settings for:
– web services
– mail services
– user settings

There are some requirements you should keep in mind before starting the installation.
It is recommended to set up key-based ssh authentication.
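
For example, key-based authentication can be set up along these lines (the hostname is just a placeholder – use your own server):

# generate a key pair on your workstation, if you don't have one yet
ssh-keygen -t rsa -b 4096
# copy the public key to the server
ssh-copy-id root@your-server.example.com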

In order to run a proper server you will have to set up a powerful computer with SSDs, set a static IP address, and obtain some signed certificates.
The following files will be used if present; otherwise they will be generated:

/root/ca-bundle.pem
/root/key.pem
/root/crt.pem

However, the installation files can be re-configured later, so skip this step if you don’t have any certificates yet.
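
If you want to provide your own files right away but have no CA-signed certificate yet, a self-signed pair can be generated with OpenSSL into the paths listed above – a rough sketch, with a placeholder subject name:

# generate a self-signed certificate and key, valid for one year
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=your-server.example.com" \
    -keyout /root/key.pem -out /root/crt.pem
# a self-signed certificate has no separate CA chain, so reuse it as the bundle
cp /root/crt.pem /root/ca-bundle.pem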

To start the host-configuration procedure use:

srvctl update-install all

srvctl is a bit similar to systemctl, and when used frequently the two can be confused, so the shorthand syntax for the command is simply:

sc command [mandatory argument] [optional argument]

To see all available commands and their syntax, simply run sc without any arguments – anytime, anywhere.
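
For example, the host installation above can also be written with the shorthand:

# print the list of available commands and their syntax
sc
# same as srvctl update-install all
sc update-install all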

The first installation will generate a configuration file that you can edit – I suggest using Midnight Commander:

mcedit /etc/srvctl/config

The installation procedure will install and configure the host with:
– LXC – either from source or from an rpm
– libvirt – for virtual networking
– Pound – the reverse proxy for HTTP/HTTPS
– fail2ban – for security (optional)
– Postfix – e-mail MTA
– Perdition – POP/IMAP reverse proxy
– BIND – DNS server
– ClamAV – antivirus
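
Once the installation has finished, you can check that the main services came up – a quick sanity check, assuming the stock Fedora unit names:

# check a few of the installed services
systemctl status postfix fail2ban libvirtd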

In order to reverse-proxy on the SMTP protocol, saslauthd has to be patched.
This is done by downloading a precompiled patched version of saslauthd.
You might want to compile your own for uncommon architectures, e.g. non-x86_64.
The main problem is that perdition sends the “OK! Capabilities” message before the authentication OK.
saslauthd does not recognize this by default, and quits. The patch fixes it.
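
A quick way to check whether the precompiled x86_64 binary applies to your machine is to look at the architecture first:

# prints the machine architecture, e.g. x86_64
uname -m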

This software is under construction and is upgraded continuously.
If you plan on using it, please contact me for support!