Category Archives: Intermediate

Patch for SquidAnalyzer 6.6 to use standard date format

SquidAnalyzer is a great tool to visualize statistics for the Squid web proxy. Unfortunately, up to and including version 6.6 there is no way to configure the date format used to parse Squid logfiles.

By default Squid writes a Unix timestamp to its access log, which is hard to read. If you change that date format to a more readable string, SquidAnalyzer no longer works.
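For illustration, a native Squid log line starts with an epoch timestamp such as 1500800188.379 (a made-up sample value); GNU date shows the readable form that the patched format produces instead:

```shell
# Convert a Unix timestamp to the "year-month-day hour:minute:second" form
date -u -d @1500800188 '+%Y-%m-%d %H:%M:%S'
# → 2017-07-23 08:56:28
```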

Here is a patch that makes SquidAnalyzer 6.6 recognize the following date format:
%{%Y-%m-%d %H:%M:%S}tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt

This is basically the same as the native squid_localtime format, except that the date is displayed in a human-readable form (year-month-day hour:minute:second).
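In squid.conf this corresponds to a logformat definition along these lines (the format name "readable" and the log path are made up here):

```
logformat readable %{%Y-%m-%d %H:%M:%S}tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
access_log /var/log/squid/access.log readable
```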

The patch for version 6.6 must be applied to the file before installation:

--- /usr/local/src/squidanalyzer-6.6/   2017-07-23 10:56:28.379684965 +0200
+++    2017-07-23 11:43:43.336149777 +0200
@@ -404,6 +404,8 @@
my $ip_regexp = qr/^([a-fA-F0-9\.\:]+)$/;
my $cidr_regex = qr/^[a-fA-F0-9\.\:]+\/\d+$/;

+# Patch: %{%Y-%m-%d %H:%M:%S}tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
+my $de_format_regex1 = qr/^(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})\s+(\d+)\s+([^\s]+)\s+([^\s]+)\s+(\d+)\s+([^\s]+)\s+(.*)/;
# Native log format squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
my $native_format_regex1 = qr/^(\d+\.\d{3})\s+(\d+)\s+([^\s]+)\s+([^\s]+)\s+(\d+)\s+([^\s]+)\s+(.*)/;
my $native_format_regex2 = qr/^([^\s]+?)\s+([^\s]+)\s+([^\s]+\/[^\s]+)\s+([^\s]+)\s*/;
@@ -535,8 +537,19 @@

my $time = 0;
my $tz = ((0-$self->{TimeZone})*3600);
-       # Squid native format
-       if ( $line =~ $native_format_regex1 ) {
+        # Patch
+        if ( $line =~ $de_format_regex1 ) {
+                $time = $1;
+                $time =~ /(\d{4})-(\d{2})-(\d{2})\s+(\d{2}):(\d{2}):(\d{2})/;
+                if (!$self->{TimeZone}) {
+                        $time = timelocal_nocheck($6, $5, $4, $3, $2 - 1, $1 - 1900);
+                } else {
+                        $time = timegm_nocheck($6, $5, $4, $3, $2 - 1, $1 - 1900) + $tz;
+                }
+                $self->{is_squidguard_log} = 0;
+                $self->{is_ufdbguard_log} = 0;
+        # Squid native format
+        } elsif ( $line =~ $native_format_regex1 ) {
$time = $1;
$self->{is_squidguard_log} = 0;
$self->{is_ufdbguard_log} = 0;
@@ -596,6 +609,11 @@
$self->{is_ufdbguard_log} = 1;
$self->{is_squidguard_log} = 0;
+                # Patch
+                } elsif ( $line =~ $de_format_regex1 ) {
+                        $self->{is_squidguard_log} = 0;
+                        $self->{is_ufdbguard_log} = 0;
+                        last;
# Squid native format
} elsif ( $line =~ $native_format_regex1 ) {
$self->{is_squidguard_log} = 0;
@@ -1237,7 +1255,23 @@
#logformat combined   %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
# Parse log with format: time elapsed client code/status bytes method URL rfc931 peerstatus/peerhost mime_type
my $format = 'native';
-               if ( !$self->{is_squidguard_log} && !$self->{is_ufdbguard_log} && ($line =~ $native_format_regex1) ) {
+                # Patch
+                if ( !$self->{is_squidguard_log} && !$self->{is_ufdbguard_log} && ($line =~ $de_format_regex1) ) {
+                        $time = $1;
+                        #$time += $tz;
+                        $elapsed = abs($2);
+                        $client_ip = $3;
+                        $code = $4;
+                        $bytes = $5;
+                        $method = $6;
+                        $line = $7;
+                        $time =~ /(\d{4})-(\d{2})-(\d{2})\s+(\d{2}):(\d{2}):(\d{2})/;
+                        if (!$self->{TimeZone}) {
+                                $time = timelocal_nocheck($6, $5, $4, $3, $2 - 1, $1 - 1900);
+                        } else {
+                                $time = timegm_nocheck($6, $5, $4, $3, $2 - 1, $1 - 1900) + $tz;
+                        }
+                } elsif ( !$self->{is_squidguard_log} && !$self->{is_ufdbguard_log} && ($line =~ $native_format_regex1) ) {
$time = $1;
$time += $tz;
$elapsed = abs($2);
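Assuming the diff above is saved as squidanalyzer-6.6-dateformat.patch (the file name is an assumption), it can be applied like this before running the installer:

```shell
cd /usr/local/src/squidanalyzer-6.6
# dry run first to verify the hunks apply cleanly, then apply for real;
# depending on how the header paths were saved, patch may prompt for the
# target file name
patch -p0 --dry-run < squidanalyzer-6.6-dateformat.patch
patch -p0 < squidanalyzer-6.6-dateformat.patch
```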


Upgrading Debian 8 Jessie to Debian 9 Stretch

If a configuration file has changed, the old version will usually be copied to a backup file (*.dpkg-old). Nevertheless, it is a good idea to make a full system backup yourself before upgrading.

A description of how to upgrade:



  • Device names stay the same (eth0, ...). Debian 9 only uses a new naming scheme for new installations.

Bacula 7.4.4

  • So far I have had no problems connecting bacula-fd v7.4.4 to a Bacula server v7.0.5

FreeRadius 3.0.12

  • Major upgrade from version 2. The configuration will not be merged automatically; you have to do this manually.
  • Basic configuration stays pretty much the same. Some configuration variables have been renamed or moved to a different position.
  • New configuration directories:

ejabberd 16.09

Postfix 3.1.4

  • Had no problems with a basic configuration and a couple of virtual mailbox domains.

amavisd-new 2.10.1-4

  • Almost no changes from previous version 2.10.1-2

spamassassin 3.4.1

  • No need to change anything if you have a default installation


Courier

  • New user/group "courier". File permissions need to be adjusted:
  • Some configuration changes (pid file, certificates location, etc.)

ntp 4.2.8p10

  • No longer subject to DRDoS Amplification Attack
  • Option "limited" added (to default restriction in configuration file)
  • Source restriction added (to configuration file)

OpenSSH 7.4

  • Major upgrade from version 6.7
  • No longer subject to the ssh client roaming problem (see Qualys Security Advisory)
  • New "AddKeysToAgent" client parameter (a private key that is used during authentication will be added to ssh-agent)
  • Default for "PermitRootLogin" changed from "yes" to "prohibit-password".
  • Default for "UsePrivilegeSeparation" changed from "yes" to "sandbox"
  • Default for "UseDNS" changed from "yes" to "no"
  • New option to require 2 different public keys for authentication; may be used for a two-man rule / four-eyes principle (see "AuthenticationMethods=publickey,publickey")
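Two of the options above as configuration snippets (illustrative only; adjust to your setup):

```
# ~/.ssh/config (client): add keys used during authentication to ssh-agent
Host *
    AddKeysToAgent yes

# /etc/ssh/sshd_config (server): require two different public keys
AuthenticationMethods publickey,publickey
```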

BIND9 network ports

List of network ports that the DNS nameserver ISC BIND v9.10 listens to by default:

Port Number   UDP/TCP   Description
53            UDP       standard port to respond to name queries
53            TCP       used for master/slave zone transfers or if query answers don't fit in a UDP packet
953           TCP       communication with the rndc client utility
2200          TCP       statistics channel (built-in webserver to display a statistics page)
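The statistics channel on port 2200 comes from a named.conf fragment along these lines (port and addresses here are examples matching the table above):

```
statistics-channels {
        inet 127.0.0.1 port 2200 allow { 127.0.0.1; };
};
```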

Connect to OpenLDAP server with PHP5 (CentOS 7)

Here is a short PHP sample script of how to connect to an OpenLDAP server using the secure LDAPS protocol (port 636).

PHP uses the LDAP settings from the LDAP base packages. In the case of CentOS 7 they are configured in /etc/openldap/ldap.conf . The following two entries are the only ones that matter:

TLS_CACERTDIR   /etc/openldap/certs
TLS_REQCERT     demand

The first line gives the directory containing the public CA certificate that was used to sign the LDAP server certificate. The second line makes the client reject all invalid certificates. To make the first line work, we need to import the public CA certificate into the local NSS database. For that we use the certutil command line utility (root privileges required):

certutil -A -n ldap -t "C,," -d dbm:/etc/openldap/certs -i /etc/ssl/certs/ldap-ca.pem
certutil -L -d dbm:/etc/openldap/certs

The first line imports an existing CA certificate into the database (with the nickname "ldap", which should be unique). The certificate database uses the old Berkeley DB format, so we need to prefix the location with "dbm:". Two files make up the certificate database:

  • cert8.db
  • key3.db

The second line of the code example merely lists all existing database entries. It should now include our new CA certificate for LDAP connections:

[root@centos7]# certutil -L -d dbm:/etc/openldap/certs 
Certificate Nickname                                         Trust Attributes 
ldap                                                         C,,

Notice the 3 trust attributes for our new CA certificate. In our case the first field needs to include the trust "C". For a description of all possible values, see "man certutil".

Now that the CA certificate for LDAPS connections is installed, we can actually try to connect to the LDAP server with PHP5.

$server = "ldaps://"; 
echo "Connecting to $server ...\n"; 

#ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, 7); 
$ldapconn = ldap_connect($server, 636) 
        or die("ERROR: Unable to connect to $server\n"); 
ldap_set_option($ldapconn, LDAP_OPT_PROTOCOL_VERSION, 3); 
$ldapbind = ldap_bind($ldapconn) 
        or die("ERROR: Unable to bind to $server\n"); 
echo "Ok, now connected to $server\n"; 

Here we make an anonymous bind to the LDAP server. You can also provide a username and password in the ldap_bind() function call. Now call this script from the command line (requires the yum package "php-cli"):

$ php php-test.php
Connecting to ldaps:// ... 
Ok, now connected to ldaps://

Important things to note:

  • Call ldap_set_option() to activate debug output.
  • ldap_connect() does not actually connect to the LDAP server. It only initializes internal data structures and variables. The network connection to port 636 will be made by ldap_bind().
  • You need to explicitly set the LDAP protocol version to 3. Otherwise version 2 will be used, which will not work with contemporary OpenLDAP servers.

Connect to OpenLDAP server with Perl

This little code example in Perl shows how to connect to an OpenLDAP server using the ldaps protocol. It tries several servers and uses the first one it can connect to.

use strict;

use Net::LDAP;
use Net::LDAP::Extension::WhoAmI;
# LDAPS is basically the same as the LDAP-Module using ldaps:// URIs
#use Net::LDAPS;

my $userName = 'USERNAME';
my $passWord = 'PASSWORD';

my @Servers = ("server1", "server2", "server3");
my $ldap = undef;

# Code = 34, Name: LDAP_INVALID_DN_SYNTAX (dn is not a full path)
# Code = 48, Name: LDAP_INAPPROPRIATE_AUTH (empty dn or password)
# Code = 49, Name: LDAP_INVALID_CREDENTIALS (wrong dn or password)
sub lErr {
  my $mesg = shift;
  printf STDERR "Error: %s\n", $mesg->error();
  printf STDERR "Error Code: %s\n", $mesg->code();
  printf STDERR "Error Name: %s\n", $mesg->error_name();
  printf STDERR "Error Text: %s", $mesg->error_text();
  printf STDERR "Error Description: %s\n", $mesg->error_desc();
  printf STDERR "Server Error: %s\n", $mesg->server_error();
}

foreach my $server (@Servers) {
  $ldap = Net::LDAP->new("ldaps://$server:636",
    verify  => 'require',
    inet4   => 1,
    timeout => 3,
    cafile  => '/etc/ssl/certs/ldap_slapd_cacert.pem' );

  if($ldap) {
    print "Ok connecting to $server\n";
    last;                     # use the first server we can connect to
  } else {
    print "Error connecting to $server: $@\n";
  }
}

if($ldap) {
  print "Now connected to " . $ldap->host() . "\n";
} else {
  exit -1;
}

my $mesg = $ldap->bind("uid=$userName,ou=People,dc=example,dc=com",
  password => "$passWord");
if($mesg->is_error()) {
  lErr($mesg);
  exit $mesg->code;
}

# Using $ldap->bind again after $ldap->unbind doesn't work

There is also an option to connect to an array of servers with a single function call. It basically does the same thing: loop through the list and use the first server that accepts a connection. But be careful: there is a known bug when "verify" is set to "optional".


Secure download of RHEL ISO installation images

You will probably download the RHEL ISO image from within the Red Hat Customer Portal and therefore use an encrypted HTTPS connection. The SHA-256 checksums for the ISO images are listed on the download page.
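To check the downloaded ISO against the published checksum, sha256sum -c expects lines of the form "<checksum>  <filename>". A small self-contained demo (for the real ISO, paste the checksum from the portal into the .sha256 file instead; file names here are made up):

```shell
echo "demo content" > demo.iso        # stand-in for the downloaded ISO
sha256sum demo.iso > demo.sha256      # normally: the checksum from the portal
sha256sum -c demo.sha256              # prints "demo.iso: OK" on success
```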

Red Hat also provides a page with all GPG keys they use for signing their software packages. In the Customer Portal, go to "Security" -> "Product Signing (GPG) Keys".

There are download links for the public keys, and the keys are also available on a public keyserver. So you can use the following command to import the main Red Hat key into your GPG keyring:

# gpg --recv-keys fd431d51
# gpg --fingerprint -k fd431d51

Compare the fingerprint of the Red Hat public key with the fingerprint on the Customer Portal website. You cannot use the GPG key to verify the ISO files, but it is useful for verifying, e.g., RPM package updates that you download directly from Red Hat websites and that are not installed the usual way via an official yum repository.



HSTS with Apache and Chrome

  • HSTS (HTTP Strict Transport Security) prevents your browser from visiting a website over an unencrypted "http://..." url. Instead you have to use the encrypted "https://..." url, otherwise your browser refuses to load the website.
    Either the webserver of the website you are visiting suggests the use of HSTS to your browser by sending an additional HTTP header, or you manually configure a certain website yourself in your browser.
  • Apache requires the module mod_headers to make the necessary changes to the HTTP headers.
  • Add this to your Apache vhost configuration:
    Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    For a description of all options see RFC:
    The "preload" option is not part of the RFC. It just signals that you want your site to be added to the browser builtin list of HSTS sites (see below). If you do not plan to get listed, you may omit this option.
  • Visit the site at least once using HTTPS in your Chrome browser ("trust on first use"). The HSTS configuration of the site (provided by the Apache STS header) will be added to an internal Chrome list. HSTS really depends on this internal browser list. Webservers only send an additional HTTP header that webbrowsers may or may not honor.
  • Add, delete or check websites in your Chrome browser:
    Changes take place immediately without having to restart Chrome.
    You can add sites even if they don't send the special STS header.
    You can combine those entries with PKP (Public Key Pinning) by providing fingerprints for all accepted public keys of a website.
  • Chrome ships with a builtin list of sites that require HSTS. If you run a large public website, you might want to get included in that list:
    These builtin sites get listed as "static_..." in your internal Chrome browser list. All other sites (added manually or by honoring the STS header) get listed as "dynamic_...".
  • You cannot delete site entries from the builtin list (assuming that you use the official Chrome browser and that it has not been manipulated).
  • This is the message you get in Chrome when HSTS is violated on a website (in this case the certificate of the site has expired and Chrome therefore refuses to establish the HTTPS connection):
You cannot visit right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later.
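Whether a site actually sends the STS header described above can be checked from the command line (the hostname here is just an example; no output means the header is not set):

```shell
# Fetch only the response headers and filter for the HSTS header
curl -sI https://www.example.org | grep -i '^strict-transport-security'
```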

Important things to note:

  • Even for HSTS enabled sites, you may still be able to type in the "http://..." URL in the browser address bar. Chrome automatically recognizes the URL and redirects you to the corresponding "https://..." URL.
    This is different from traditional HTTP redirects, because no unencrypted traffic is sent over the network. The redirection already takes place in the browser.
    The downside of this behaviour is that it makes it hard for people to identify if a website is using HSTS or simply redirects all traffic from HTTP/port 80 to HTTPS/port 443 (HTTP status codes 3xx).
  • Many browser plugins now offer the same functionality (redirect some or all website addresses to HTTPS URLs).
  • Maybe some day HTTPS URLs will become the default in web browsers. If you type a URL into the address bar, or select a URL without the leading "http(s)://", the browser would first redirect you automatically to the HTTPS URL. Only if no connection is possible would you receive a warning message and get redirected to the HTTP URL. Let's make HTTPS the default in browsers and accept HTTP only for a small number of exceptions.
    No green lock icon for SSL encrypted websites, just red unlock icons for unencrypted websites.



Secure download of Ubuntu ISO installation images

Please follow the instructions on this page:

There is another website, but it doesn't use SSL / HTTPS:

The procedure is the same as I have already described for CentOS or Debian in my previous posts:

  1. Import the GPG-key and verify its fingerprint.
  2. Download the checksum file and verify its signature with the GPG-key.
  3. Check the iso file with the checksum file.

Again, the fingerprint of the GPG key is published on an SSL-encrypted website, where you have to check the website certificate and its root CA.

Firefox ships with its own set of root CAs ("Builtin Object Token" as the Security Device in advanced preference settings). Here is a list of all root CAs included in Firefox along with their fingerprints:

Builtin root CAs are hardcoded in /usr/lib/firefox/

CAs marked as "Software Security Device" are usually intermediate certificates that are downloaded from websites and stored locally. CAs that are not builtin are either stored on a PKCS#11 compatible smartcard attached to your PC/laptop or saved to your home directory:
certutil -d ~/.mozilla/firefox/xxx.default -L

Chromium / Google Chrome does not ship with its own CA list but uses the CAs from the underlying operating system:

On Ubuntu 16.04 these CAs are hardcoded in /usr/lib/x86_64-linux-gnu/nss/ which is part of the package "libnss3".

Important things to note:

  • Verification of ISO images is based on GPG keys, which have to be verified via their fingerprints. You can get those fingerprints from an SSL-secured website.
  • The security of a website depends on the root CA which is used to sign the website certificate. These CAs are stored locally in different locations based on the browser you are using.
  • Neither Firefox nor Chromium / Google Chrome are using CAs from the package "ca-certificates".

Secure download for CentOS 7

The basic idea for downloading a CentOS 7 installation image in a secure way is this:

  1. Download the CentOS public key from a public keyserver.
  2. By using that key you can verify the signature of the checksum file of the CentOS ISO image.
  3. With the checksum file you check the downloaded ISO image to see if it is the original file and has not been changed or tampered with.
[CentOS Public Key]  ->  [Signature of checksum file]  ->  [ISO image]

Here are the steps to take:

0. Most important: Make sure to follow this procedure on a computer that is secure and that you fully trust. Otherwise all of the following steps are pretty much useless.

1. Download the CentOS 7 public key:
gpg --search-keys --keyserver-options proxy-server=http://proxy.local.example:8080 F4A80EB5
(or without using a proxy server: gpg --search-keys F4A80EB5)
Accept the key by typing "1". If no key was found, try using a specific keyserver with the "--keyserver" option. By default gpg uses "".

Make sure the key has really been imported into your public gpg keyring:
gpg --fingerprint -k

The "--fingerprint" option shows the fingerprint of the just imported key. Compare it with the fingerprint on the official CentOS website:
Make sure to double check the SSL certificate of that website in your browser.

2. Download the checksum file for the DVD image. It contains checksums for a large variety of CentOS ISO images:

Check the validity of the checksum file:
gpg --verify sha256sum.txt.asc

3. Check the validity of the downloaded ISO image file:
sha256sum -c centos-sha256sum.txt.asc


Upgrade from Ubuntu Desktop 14.04 LTS to 16.04 LTS (KDE desktop)

I just upgraded from Ubuntu Desktop 14.04 LTS to 16.04 LTS. It worked without major problems and didn't take a long time. I am not using the Kubuntu distribution, only the native Ubuntu Desktop version. You can still use KDE as the standard desktop. Here are some notes:

- "do-release-upgrade" didn't work for some reason. It just showed "No new release found". I had to use "do-release-upgrade -p".

- Versions:

  • Kernel 4.4.0-21
  • KDE Framework 5.18.0
  • libvirt 1.3.1
  • virt-manager 1.3.2
  • MySQL 5.7.12
  • Apache 2.4.18
  • ClamAV 0.99
  • OpenSSL 1.0.2g-fips
  • OpenSSH 7.2p2
  • Bacula 7.0.5

- No problems upgrading LVM root partition on LUKS encrypted disk partition.

- Virtual Machine Manager now supports snapshots and cache modes "directsync" and "unsafe" for disk devices. Some options are missing though, like cpu pinning.

- KDE did not work after upgrading and rebooting. I had to install the meta package "kubuntu-desktop" manually, which pulls in all necessary dependencies to run KDE as the standard desktop manager. The display manager "kdm" is now replaced by "sddm", which works great. So the "kdm" package is missing now and no longer part of the default repositories.

You can change the default display manager by editing /etc/X11/default-display-manager or by running "dpkg-reconfigure sddm".

- KDE desktop theme Breeze looks very nice. Take a look here:

- Upstart has been replaced by systemd. Make sure to know some basics about the command line interface "systemctl" before upgrading in case there are problems during the upgrade process.

Typing "systemctl<tab><tab>" gives you a list of command line options. Just typing "systemctl" lists all services. The column "SUB" shows you whether the service is running or not.

With the switch to systemd, consolekit is no longer required. kubuntu-desktop depends on either systemd or consolekit. As systemd is installed now, you can safely delete all consolekit packages, especially if the package is no longer supported by Ubuntu anyway (e.g. consolekit, libck-connector0). 

- ZFS is part of the standard repositories. You do not have to add any 3rd party repository to try it out.

- Bacula client (bacula-fd 7.0.5) is not compatible with previous version of Bacula server (bacula-director/bacula-sd 5.2.6) on Ubuntu 14.04. Checking the status of the client works in bacula director, but running a job on bacula-fd in debug mode (bacula-fd -c /etc/bacula/bacula-fd.conf -f -d 100) shows the following output:

bacula-fd: job.c:1855-0 StorageCmd: storage address=x.x.x.x port=9103 ssl=0
bacula-fd: bsock.c:208-0 Current x.x.x.x:9103 All x.x.x.x:9103
bacula-fd: bsock.c:137-0 who=Storage daemon host=x.x.x.x port=9103
bacula-fd: bsock.c:310-0 OK connected to server Storage daemon x.x.x.x:9103.
bacula-fd: authenticate.c:237-0 Send to SD: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: authenticate.c:240-0 ==== respond to SD challenge
bacula-fd: cram-md5.c:119-0 cram-get received: authenticate.c:79 Bad Hello command from Director at client: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: cram-md5.c:124-0 Cannot scan received response to challenge: authenticate.c:79 Bad Hello command from Director at client: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: authenticate.c:247-0 cram_respond failed for SD: Storage daemon

It is however quite simple to download and compile the latest 5.2.x version of bacula (5.2.13):

  • systemctl stop bacula-fd
  • Install packages required for building bacula client from source:
    apt-get install build-essential libssl-dev
  • Download bacula-5.2.13.tar.gz and bacula-5.2.13.tar.gz.sig from
  • Import Bacula Distribution Verification Key and check key fingerprint (fingerprint for my downloaded Bacula key is 2CA9 F510 CA5C CAF6 1AB5  29F5 9E98 BF32 10A7 92AD):
    gpg --recv-keys 10A792AD
    gpg --fingerprint -k 10A792AD
  • Check signature of downloaded files:
    gpg --verify bacula-5.2.13.tar.gz.sig 
  • tar -xzvf bacula-5.2.13.tar.gz
  • cd bacula-5.2.13
  • ./configure --prefix=/usr/local --enable-client-only --disable-build-dird --disable-build-stored --with-openssl --with-pid-dir=/var/run/bacula
  • check output of previous configure command
  • make && make install
  • check output of previous command for any errors
  • create new file /etc/
  • ldconfig
  • edit file /etc/init.d/bacula-fd and change variable DAEMON:
  • systemctl daemon-reload
  • systemctl start bacula-fd

- I experienced a problem with the ntp service. "systemctl start ntp" did not show any error messages, but the ntp service was not running afterwards. There were no suspicious entries in the log files. I had to remove / purge the "upstart" package and then reinstall the package "ntp" to make it work again. ntp does still use the old init-script under "/etc/init.d". Starting the service with the init-script did work, but using "service ntp start" or "systemctl start ntp" did not start the ntp process. It did not even try to run the init-script in "/etc/init.d". Not sure what the real cause for the problem was, but as I said removing upstart and reinstalling ntp fixed the problem.

- Changes in configuration files or software features:

  • New default for /etc/ssh/sshd_config / permit_root_login: "yes" -> "prohibit-password"
    With this default setting, root is no longer able to login to SSH with username/password.
  • chkrootkit is trying to run "ssh -G" which is not working without a hostname (false positive, ignore): 
    "Searching for Linux/Ebury - Operation Windigo ssh...        Possible Linux/Ebury - Operation Windigo installetd"
  • "dpkg-log-summary" shows a history of recent package installations (install, update, remove) 
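Regarding the changed permit_root_login default mentioned above, the old behaviour can be restored explicitly in sshd_config (shown for illustration only, not recommended):

```
# /etc/ssh/sshd_config
# New default is "prohibit-password"; "yes" re-enables root password logins
PermitRootLogin yes
```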

 - Post-installation task: Remove all packages that you don't need or which are no longer supported by Ubuntu: 

ubuntu-support-status --show-unsupported
  • upstart packages (upstart, libupstart1)
  • unity
  • ubuntu-desktop
  • lightdm
  • anacron (if running Ubuntu on a 24x7 installation)
  • bluez, bluedevil (if you don't need bluetooth)