Category Archives: Intermediate

Secure download of RHEL ISO installation images

You will probably download the RHEL ISO image from within the Red Hat Customer Portal and therefore use an encrypted HTTPS connection (download URL is https://access.cdn.redhat.com/...). The SHA-256 checksums for the ISO images are on the download page.

Red Hat also provides a page with all GPG keys they use for signing their software packages. In the Customer Portal, go to "Security" -> "Product Signing (GPG) Keys" (https://www.redhat.com/security/team/key/).

There are download links for the public keys (https://www.redhat.com/...). The keys are also available on the keyserver pgp.mit.edu. So you can use the following commands to import the main Red Hat key into your GPG keyring and show its fingerprint:

# gpg --recv-keys fd431d51
# gpg --fingerprint -k fd431d51

Compare the fingerprint of the Red Hat public key with the fingerprint on the Customer Portal website. You cannot use the GPG key for verifying the ISO files, but it is useful, for example, for verifying RPM package updates that you download directly from Red Hat websites instead of installing them the usual way via an official yum repository.
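To verify the ISO itself, compare its SHA-256 checksum with the one from the download page. Here is a minimal sketch of that workflow, using a small dummy file in place of the real ISO (all filenames are made up for the demo):

```shell
# Stand-in for the downloaded ISO (demo only; use your real rhel-*.iso)
printf 'demo iso content' > rhel-demo.iso

# The portal publishes the checksum; here we generate one ourselves
sha256sum rhel-demo.iso > CHECKSUM

# Verification: sha256sum -c prints "OK" if the file is unmodified
sha256sum -c CHECKSUM
```

With a real download you would put the checksum from the download page into the CHECKSUM file (format: "<hash>  <filename>") instead of generating it locally.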


HSTS with Apache and Chrome

  • HSTS (HTTP Strict Transport Security) prevents your browser from visiting a website over an unencrypted "http://..." URL. Instead you have to use the encrypted "https://..." URL, otherwise your browser refuses to load the website.
    Either the webserver of the website you are visiting suggests the use of HSTS to your browser by sending an additional HTTP header, or you manually configure a certain website yourself in your browser.
  • Apache requires the module mod_headers to make the necessary changes to the HTTP headers.
  • Add this to your Apache vhost configuration:
    Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    For a description of all options see RFC: https://tools.ietf.org/html/rfc6797#section-6
    The "preload" option is not part of the RFC. It just signals that you want your site to be added to the browser builtin list of HSTS sites (see below). If you do not plan to get listed, you may omit this option.
  • Visit the site at least once using HTTPS in your Chrome browser ("trust on first use"). The HSTS configuration of the site (provided by the Apache STS header) will be added to an internal Chrome list. HSTS really depends on this internal browser list. Webservers only send an additional HTTP header that web browsers may or may not honor.
  • Add, delete or check websites in your Chrome browser:
    chrome://net-internals/#hsts
    Changes take place immediately without having to restart Chrome.
    You can add sites even if they don't send the special STS header.
    You can combine those entries with PKP (Public Key Pinning) by providing fingerprints for all accepted public keys of a website.
  • Chrome ships with a builtin list of sites that require HSTS. If you run a large public website, you might want to get included in that list: https://hstspreload.appspot.com/
    These builtin sites get listed as "static_..." in your internal Chrome browser list. All other sites (added manually or by honoring the STS header) get listed as "dynamic_...".
  • You cannot delete site entries from the builtin list (assuming that you use the official Chrome browser and that it has not been manipulated).
  • This is the message you get in Chrome when HSTS is violated on a website (in this case the certificate of www.rolandschnabel.de has expired and therefore Chrome refuses to establish the HTTPS connection):
You cannot visit www.rolandschnabel.de right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later.

Important things to note:

  • Even for HSTS enabled sites, you may still be able to type in the "http://..." URL in the browser address bar. Chrome automatically recognizes the URL and redirects you to the corresponding "https://..." URL.
    This is different from traditional HTTP redirects, because no unencrypted traffic is sent over the network. The redirection already takes place in the browser.
    The downside of this behaviour is that it makes it hard for people to identify if a website is using HSTS or simply redirects all traffic from HTTP/port 80 to HTTPS/port 443 (HTTP status codes 3xx).
  • Many browser plugins now offer the same functionality (redirect some or all website addresses to HTTPS URLs).
  • Maybe some day HTTPS will become the default in web browsers: if you type a URL without the leading "http(s)://", the browser would first try the HTTPS URL automatically. Only if no connection is possible would you get a warning message and be redirected to the HTTP URL. Let's make HTTPS the default in browsers and accept HTTP only for a small number of exceptions.
    No green lock icon for SSL encrypted websites, just red unlock icons for unencrypted websites.
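Putting the Apache side together, here is a minimal HTTPS vhost sketch (ServerName and certificate paths are placeholders; assumes mod_headers and mod_ssl are enabled, e.g. via "a2enmod headers ssl"):

```apache
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # Send the HSTS header on every response (6 months, all subdomains)
    Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains"
</VirtualHost>
```

Only add "preload" once you are sure you want the site on the builtin browser lists; getting removed from them again can take a long time.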


Secure download of Ubuntu ISO installation images

Please follow the instructions on this page:
https://help.ubuntu.com/community/VerifyIsoHowto

There is another website, but it doesn't use SSL / HTTPS:
http://www.ubuntu.com/download/how-to-verify

The procedure is the same as I have already described for CentOS or Debian in my previous posts:

  1. Import the GPG-key and verify its fingerprint.
  2. Download the checksum file and verify its signature with the GPG-key.
  3. Check the ISO file against the checksum file.

Again, the fingerprint of the GPG-key is on an SSL-encrypted website, where you have to check the website certificate and its root CA.
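The three steps can be sketched locally with a throwaway GPG key standing in for the Ubuntu signing key (the demo key and filenames are made up; with a real image you would import the Ubuntu key and download SHA256SUMS plus SHA256SUMS.gpg instead):

```shell
# Throwaway keyring and key for the demo (never do this with the real key!)
export GNUPGHOME=$(mktemp -d)
gpg --batch --passphrase '' --quick-generate-key demo@example.org 2>/dev/null

# Stand-in for the ISO and its checksum file
printf 'ubuntu demo image' > ubuntu-demo.iso
sha256sum ubuntu-demo.iso > SHA256SUMS

# Step 2: create (normally: download) and verify the detached signature
gpg --batch --output SHA256SUMS.gpg --detach-sign SHA256SUMS
gpg --verify SHA256SUMS.gpg SHA256SUMS

# Step 3: check the ISO against the verified checksum file
sha256sum -c SHA256SUMS
```

If "gpg --verify" does not report a good signature, do not trust the checksum file, no matter what "sha256sum -c" says afterwards.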

Firefox ships with its own set of root CAs ("Builtin Object Token" as the Security Device in advanced preference settings). Here is a list of all root CAs included in Firefox along with their fingerprints:
https://mozillacaprogram.secure.force.com/CA/IncludedCACertificateReport

Builtin root CAs are hardcoded in /usr/lib/firefox/libnssckbi.so

CAs marked as "Software Security Device" are usually intermediate certificates that are downloaded from websites and stored locally. CAs that are not builtin are either stored on a PKCS#11-compatible smartcard attached to your PC/laptop or saved in your home directory:
certutil -d ~/.mozilla/firefox/xxx.default -L

Chromium / Google Chrome does not ship with its own CA list but uses the CAs from the underlying operating system:
https://www.chromium.org/Home/chromium-security/root-ca-policy

On Ubuntu 16.04 these CAs are hardcoded in /usr/lib/x86_64-linux-gnu/nss/libnssckbi.so which is part of the package "libnss3".

Important things to note:

  • Verification of ISO images is based on GPG-keys, which have to be checked by their fingerprints. You can get that fingerprint from an SSL-secured website.
  • The security of a website depends on the root CA which is used to sign the website certificate. These CAs are stored locally in different locations based on the browser you are using.
  • Neither Firefox nor Chromium / Google Chrome are using CAs from the package "ca-certificates".

Secure download for CentOS 7

The basic idea for downloading a CentOS 7 installation image in a secure way is this:

  1. Download the CentOS public key from a public keyserver.
  2. By using that key you can verify the signature of the checksum file of the CentOS ISO image.
  3. With the checksum file you check the downloaded ISO image to see if it is the original file and has not been changed or tampered with.
[CentOS Public Key]  ->  [Signature of checksum file]  ->  [ISO image]

Here are the steps to take:

0. Most important: Make sure to follow this procedure on a computer that is secure and that you fully trust. Otherwise all of the following steps are pretty much useless.

1. Download the CentOS 7 public key:
gpg --search-keys --keyserver-options proxy-server=http://proxy.local.example:8080 F4A80EB5
(or without using a proxy server: gpg --search-keys F4A80EB5)
Accept the key by typing "1". If no key was found, try using a specific keyserver with the "--keyserver" option. By default gpg uses "keys.gnupg.net".

Make sure the key has really been imported into your public gpg keyring:
gpg --fingerprint -k

The "--fingerprint" option shows the fingerprint of the just imported key. Compare it with the fingerprint on the official CentOS website: https://www.centos.org/keys/
Make sure to double check the SSL certificate of that website in your browser.

2. Download the checksum file for the DVD image. It contains checksums for a large variety of CentOS ISO images:
wget http://buildlogs.centos.org/rolling/7/isos/x86_64/sha256sum.txt.asc

Check the validity of the checksum file:
gpg --verify sha256sum.txt.asc

3. Check the validity of the downloaded ISO image file:
sha256sum -c sha256sum.txt.asc

(sha256sum will warn about the GPG signature lines it cannot parse; what matters is the "OK" result for the ISO image you downloaded.)


Upgrade from Ubuntu Desktop 14.04 LTS to 16.04 LTS (KDE desktop)

I just upgraded from Ubuntu Desktop 14.04 LTS to 16.04 LTS. It worked without major problems and didn't take a long time. I am not using the Kubuntu distribution, only the native Ubuntu Desktop version. You can still use KDE as the standard desktop. Here are some notes:

- "do-release-upgrade" didn't work for some reason. It just showed "No new release found". I had to use "do-release-upgrade -p".

- Versions:

  • Kernel 4.4.0-21
  • KDE Framework 5.18.0
  • libvirt 1.3.1
  • virt-manager 1.3.2
  • MySQL 5.7.12
  • Apache 2.4.18
  • ClamAV 0.99
  • OpenSSL 1.0.2g-fips
  • OpenSSH 7.2p2
  • Bacula 7.0.5
    http://www.bacula.org/9.0.x-manuals/en/main/New_Features_in_7_0_0.html

- No problems upgrading LVM root partition on LUKS encrypted disk partition.

- Virtual Machine Manager now supports snapshots and the cache modes "directsync" and "unsafe" for disk devices. Some options are still missing though, like CPU pinning.

- KDE did not work after upgrading and rebooting. I had to install the meta package "kubuntu-desktop" manually, which pulls in all necessary dependencies to run KDE as the standard desktop manager. The display manager "kdm" is now replaced by "sddm", which works great. So the "kdm" package is missing now and no longer part of the default repositories.

You can change the default display manager by editing /etc/X11/default-display-manager or by running "dpkg-reconfigure sddm".

- KDE desktop theme Breeze looks very nice. Take a look here:
http://kde-look.org/content/show.php/Elegant+Breeze?content=166630

- Upstart has been replaced by systemd. Make sure to know some basics about the command line interface "systemctl" before upgrading in case there are problems during the upgrade process.

Typing "systemctl<tab><tab> gives you a list of command line options. Just typing "systemctl" lists all services. The column "SUB" shows you if the service is running or not.

With the switch to systemd, consolekit is no longer required. kubuntu-desktop depends on either systemd or consolekit. As systemd is installed now, you can safely delete all consolekit packages, especially if the package is no longer supported by Ubuntu anyway (e.g. consolekit, libck-connector0).

- ZFS is part of the standard repositories. You do not have to add any 3rd party repository to try it out.

- The Bacula client (bacula-fd 7.0.5) is not compatible with the previous version of the Bacula server (bacula-director/bacula-sd 5.2.6) on Ubuntu 14.04. Checking the status of the client still works in the Bacula director, but running a job on bacula-fd in debug mode (bacula-fd -c /etc/bacula/bacula-fd.conf -f -d 100) shows the following output:

bacula-fd: job.c:1855-0 StorageCmd: storage address=x.x.x.x port=9103 ssl=0
bacula-fd: bsock.c:208-0 Current x.x.x.x:9103 All x.x.x.x:9103 
bacula-fd: bsock.c:137-0 who=Storage daemon host=x.x.x.x port=9103
bacula-fd: bsock.c:310-0 OK connected to server Storage daemon x.x.x.x:9103.
bacula-fd: authenticate.c:237-0 Send to SD: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: authenticate.c:240-0 ==== respond to SD challenge
bacula-fd: cram-md5.c:119-0 cram-get received: authenticate.c:79 Bad Hello command from Director at client: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: cram-md5.c:124-0 Cannot scan received response to challenge: authenticate.c:79 Bad Hello command from Director at client: Hello Bacula SD: Start Job bacula-data.2016-05-29_07.53.26_05 5
bacula-fd: authenticate.c:247-0 cram_respond failed for SD: Storage daemon

It is however quite simple to download and compile the latest 5.2.x version of bacula (5.2.13):

  • systemctl stop bacula-fd
  • Install packages required for building bacula client from source:
    apt-get install build-essential libssl-dev
  • Download bacula-5.2.13.tar.gz and bacula-5.2.13.tar.gz.sig from https://sourceforge.net/projects/bacula/files/bacula/5.2.13/
  • Import Bacula Distribution Verification Key and check key fingerprint (fingerprint for my downloaded Bacula key is 2CA9 F510 CA5C CAF6 1AB5  29F5 9E98 BF32 10A7 92AD):
    gpg --recv-keys 10A792AD
    gpg --fingerprint -k 10A792AD
  • Check signature of downloaded files:
    gpg --verify bacula-5.2.13.tar.gz.sig
  • tar -xzvf bacula-5.2.13.tar.gz
  • cd bacula-5.2.13
  • ./configure --prefix=/usr/local --enable-client-only --disable-build-dird --disable-build-stored --with-openssl --with-pid-dir=/var/run/bacula
  • check output of previous configure command
  • make && make install
  • check output of previous command for any errors
  • create new file /etc/ld.so.conf.d/local.conf:
    /usr/local/lib
  • ldconfig
  • edit file /etc/init.d/bacula-fd and change variable DAEMON:
    DAEMON=/usr/local/sbin/bacula-fd
  • systemctl daemon-reload
  • systemctl start bacula-fd

- I experienced a problem with the ntp service. "systemctl start ntp" did not show any error messages, but the ntp service was not running afterwards. There were no suspicious entries in the log files. I had to remove / purge the "upstart" package and then reinstall the package "ntp" to make it work again. ntp does still use the old init-script under "/etc/init.d". Starting the service with the init-script did work, but using "service ntp start" or "systemctl start ntp" did not start the ntp process. It did not even try to run the init-script in "/etc/init.d". Not sure what the real cause for the problem was, but as I said removing upstart and reinstalling ntp fixed the problem.

- Changes in configuration files or software features:

  • New default in /etc/ssh/sshd_config: PermitRootLogin "yes" -> "prohibit-password"
    With this default setting, root is no longer able to log in via SSH with a password.
  • chkrootkit is trying to run "ssh -G" which is not working without a hostname (false positive, ignore):
    "Searching for Linux/Ebury - Operation Windigo ssh...        Possible Linux/Ebury - Operation Windigo installetd"
  • "dpkg-log-summary" shows a history of recent package installations (install, update, remove)

- Post-installation task: Remove all packages that you don't need or which are no longer supported by Ubuntu. Get a list of unsupported packages with:

ubuntu-support-status --show-unsupported

Candidates for removal:
  • upstart packages (upstart, libupstart1)
  • unity
  • ubuntu-desktop
  • lightdm
  • anacron (if running Ubuntu on a 24x7 installation)
  • bluez, bluedevil (if you don't need bluetooth)

Squid performance

If your cache hit rates are low, you can disable disk caching completely. Comment out all cache_dir entries in squid.conf:

# cache_dir ...

Check with "squidclient mgr:info":
...
0 on-disk objects

(Why running squid without disk caching? See one of my next posts.)


Enable SMP mode and set CPU affinity, e.g. if you have 2 CPU cores:

workers 2
cpu_affinity_map process_numbers=1,2 cores=1,2

Check with "ps aux | grep squid":
root 21107 0.0 0.5 344768 5196 ? Ss Apr16 0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
proxy 21110 0.0 2.3 384584 24060 ? S Apr16 0:04 (squid-coord-3) -YC -f /etc/squid3/squid.conf
proxy 21111 0.0 2.5 387348 26536 ? S Apr16 0:13 (squid-2) -YC -f /etc/squid3/squid.conf
proxy 21112 0.2 3.2 391764 33024 ? S Apr16 2:06 (squid-1) -YC -f /etc/squid3/squid.conf

Notice the new worker processes "squid-1" and "squid-2", and the coordinator process "squid-coord-3".


Postfix help on command line

I just tried to get some information on the Postfix website, but I cannot access it. The website seems to be down. So I'll take this opportunity to write about how to get Postfix help on the command line:

Show a list of all postfix man pages in the postfix section:
# apropos -s 8postfix ".*"

Show all man pages for a specific configuration entry. In this case we are searching for all man pages that provide information about configuration entries that contain the string "queue_lifetime":
# man -s 8postfix -wK queue_lifetime

Show a brief description of all possible postfix configuration settings:
# man 5 postconf

Show postfix templates:
# postconf -t


Unattended-Upgrades with Debian Jessie 8

The package "unattended-upgrades" comes pretty handy for automatic security updates. Nevertheless there are a couple of settings you have to be aware of.

First you have to install the package "unattended-upgrades" of course. It installs a binary called "unattended-upgrades" which is run daily by the apt cron job:
/etc/cron.daily/apt

After that you have to enable not only unattended-upgrades itself but also the periodic update of your package lists. Create the following file:

/etc/apt/apt.conf.d/02periodic

APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Update-Package-Lists "1";

"1" means updating and running unattended-upgrades every day.

There are a couple more options. Take a look at the following files:

/etc/cron.daily/apt
/etc/apt/apt.conf.d/50unattended-upgrades
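The file 50unattended-upgrades also controls which origins are upgraded and what happens after a run. A few options you may want to look at (shown with example values; check the comments in the shipped file for the authoritative list):

```
// Get a mail for each run that installed packages (needs a working MTA)
Unattended-Upgrade::Mail "root";

// Remove packages that became unneeded by an upgrade
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Reboot automatically if an upgrade requires it (off by default)
Unattended-Upgrade::Automatic-Reboot "false";
```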


MySQL error "Incorrect key file for table"

Usually this error can be corrected by running "myisamchk -r <tablename.MYI>" on the MyISAM table in question. But this time it didn't resolve the error.

Let's take a close look at the error message:

Incorrect key file for table '/tmp/#sql_5b1f_0.MYI'; try to repair it [126]

The location of the MyISAM file is really strange. It is in the /tmp folder and not in the database directory. Also, the name of the table looks like it is not part of the original database.

When this happens, MySQL is trying to create a temporary table in the tmp folder to resolve a very large query. It uses temporary tables to store intermediate results for large queries. But if there is not enough space in the tmp folder to create the corresponding key files, MySQL throws the above error message, which is in this case a little bit misleading.

To resolve the problem I just had to enlarge my /tmp partition. Originally it was just 350 MB in size, which is really a bad idea for a database server. So I gave it another 800 MB, and the error message disappeared.
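To see whether you are affected, check the free space on the partition that MySQL's tmpdir lives on. This sketch assumes the default tmpdir /tmp; running "SHOW VARIABLES LIKE 'tmpdir';" in the MySQL client tells you the actual location:

```shell
# Available space (in KB) on the filesystem holding /tmp
df -P /tmp | awk 'NR==2 {print $4}'
```

If the number is smaller than the intermediate results of your largest queries, you will keep hitting this error.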


Secure download for Debian

The basic idea for downloading the Debian installation image in a secure way is this:

  1. Download the Debian public key from keyring.debian.org.
  2. Using that key you verify the signature of the checksum file of the ISO image.
  3. With the checksum file you check the downloaded ISO image to see if it is the original file and has not been changed or tampered with.

Here are the steps to take:

0. Most important: Make sure to follow this procedure on a computer that is secure and that you fully trust. Otherwise all of the following steps are pretty much useless.

1. Download the Debian public key:
gpg --keyserver hkp://keyring.debian.org --recv-keys 6294BE9B

Make sure the key has really been imported into your public gpg keyring:
gpg --fingerprint -k

The "--fingerprint" option shows the fingerprint of the just imported key. Compare it with the fingerprint on the official Debian website: https://www.debian.org/CD/verify
Make sure to double check the SSL certificate of that website in your browser.

2. Download the checksum file for the ISO image and the corresponding signature file:
wget http://cdimage.debian.org/.../SHA512SUMS
wget http://cdimage.debian.org/.../SHA512SUMS.sign

Check the validity of the checksum file:
gpg --verify SHA512SUMS.sign

3. Check the validity of the downloaded ISO image file:
sha512sum -c SHA512SUMS
