PAM authentication per application (Debian 8 Jessie)

PAM is the default authentication framework on Linux. It is very flexible and powerful, and even allows you to configure different authentication options for each application. In this example we will use the PAM module "pam_listfile", which is already included in the standard package "libpam-modules".

The application name has to match the name of a file under /etc/pam.d. So, for example, application "abc" would use the following PAM configuration file /etc/pam.d/abc:

auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/group.allow
@include common-auth
@include common-account
@include common-password
@include common-session

The first line only authenticates users that are members of a group listed in /etc/group.allow. The file lists one group name per line; in this example it contains a single line:

abc_group

This will allow only members of the group "abc_group" to log in to application "abc". After adding new configuration files, always test your PAM settings with pamtester:

# id bob
uid=1003(bob) gid=1003(bob) groups=1003(bob)
# pamtester abc bob authenticate
Password: 
pamtester: Authentication failure
# usermod -aG abc_group bob
# pamtester abc bob authenticate
Password: 
pamtester: successfully authenticated

At first the user "bob" is not a member of the group "abc_group", so pamtester fails to authenticate him even when the correct password is provided. After adding "bob" to the group "abc_group", authentication succeeds.
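The steps above can be sketched as shell commands. This sketch writes into a scratch directory so it is safe to try anywhere; on a real system the files go under /etc and the commands need root:

```shell
# Scratch directory instead of the real /etc (safe to run anywhere)
root=$(mktemp -d)
mkdir -p "$root/etc/pam.d"

# 1. The allow list: one group name per line
echo "abc_group" > "$root/etc/group.allow"

# 2. The per-application PAM configuration (would be /etc/pam.d/abc)
cat > "$root/etc/pam.d/abc" <<'EOF'
auth required pam_listfile.so onerr=fail item=group sense=allow file=/etc/group.allow
@include common-auth
@include common-account
@include common-password
@include common-session
EOF

# 3. On the real system, also create the group and add users to it:
#    groupadd abc_group && usermod -aG abc_group bob
```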

 


Connect to OpenLDAP server with Perl

This little code example in Perl shows how to connect to an OpenLDAP server using the ldaps protocol. It tries several servers and uses the first one it can connect to.

#!/usr/bin/perl
use strict;
use warnings;

use Net::LDAP;
use Net::LDAP::Extension::WhoAmI;
# LDAPS is basically the same as the LDAP-Module using ldaps:// URIs
#use Net::LDAPS;

my $userName = 'USERNAME';
my $passWord = 'PASSWORD';

my @Servers = ("server1", "server2", "server3");
my $ldap = undef;

# Code = 34, Name: LDAP_INVALID_DN_SYNTAX (dn is not a full path)
# Code = 48, Name: LDAP_INAPPROPRIATE_AUTH (empty dn or password)
# Code = 49, Name: LDAP_INVALID_CREDENTIALS (wrong dn or password)
sub lErr {
  my $mesg = shift;
  printf STDERR "Error: %s\n", $mesg->error();
  printf STDERR "Error Code: %s\n", $mesg->code();
  printf STDERR "Error Name: %s\n", $mesg->error_name();
  printf STDERR "Error Text: %s\n", $mesg->error_text();
  printf STDERR "Error Description: %s\n", $mesg->error_desc();
  printf STDERR "Server Error: %s\n", $mesg->server_error();
}

foreach my $server (@Servers) {
  $ldap = Net::LDAP->new("ldaps://$server:636",
    verify  => 'require',
    inet4   => 1,
    timeout => 3,
    cafile  => '/etc/ssl/certs/ldap_slapd_cacert.pem');

  if($ldap) {
    print "Ok connecting to $server\n";
    last;
  }
  else {
    print "Error connecting to $server: $@\n";
  }
}

if($ldap) {
  print "Now connected to " . $ldap->host() . "\n";
}
else {
  exit 1;  # exit codes are 0-255; -1 would wrap around to 255
}
my $mesg = $ldap->bind("uid=$userName,ou=People,dc=example,dc=com",
  password => "$passWord");
if($mesg->is_error()) {
  exit $mesg->code;
}

# Using $ldap->bind again after $ldap->unbind doesn't work
$ldap->unbind;

There is also an option to connect to a list of servers with a single call: Net::LDAP->new() accepts a reference to an array of hosts or URIs and does the same thing, looping through the list and using the first successful connection. But be careful, there is a known bug when "verify" is set to "optional" (see https://rt.cpan.org/Public/Bug/Display.html?id=118477).


Certificate Authorities (CA) in Google Chrome / Chromium and Firefox on Linux

Firefox ships with its own set of root CAs ("Builtin Object Token" as the Security Device in advanced preference settings). Here is the list of all root CAs included in Firefox along with their fingerprints:
https://mozillacaprogram.secure.force.com/CA/IncludedCACertificateReport

Builtin root CAs are hardcoded in /usr/lib/firefox/libnssckbi.so. You can see a list of all CAs in the Firefox preferences (advanced settings).

CAs marked as "Software Security Device" are usually intermediate certificates that were downloaded from websites and stored locally. CAs that are not builtin are either stored on a PKCS#11 compatible smartcard attached to your PC/laptop or saved to your home directory:
certutil -d $HOME/.mozilla/firefox/xxx.default -L

Chromium / Google Chrome does not ship with its own CA list but uses the CAs from the underlying operating system:
https://www.chromium.org/Home/chromium-security/root-ca-policy

In Ubuntu 16.04 these CAs are hardcoded in /usr/lib/x86_64-linux-gnu/nss/libnssckbi.so which is part of the package "libnss3". You should therefore update this package as soon as there is an update available to keep your builtin CA list up-to-date.

CAs that are not builtin and that you installed manually are stored in your home directory:
certutil -d sql:$HOME/.pki/nssdb -L

Important things to note:

  • The security of SSL encrypted websites (https://...) depends on the root CA which is used to sign the website certificate. These CAs are stored locally on your device in different locations based on the browser you are using.
  • There are 2 kinds of CAs:
    1. Builtin CAs that ship with your browser or linux installation. They are stored in shared object files. There is probably no easy way to edit this list unless you change the source files and recompile the package. Nevertheless in both browsers you can remove all trust from a builtin certificate which is basically the same as deleting it.
    2. Manually added CAs are stored in your home directory. You can easily edit that list within the settings of the browser or on the command line.
  • Both Firefox and Chromium / Google Chrome use NSS certificate databases to store manually added CAs that are not builtin, but they use different directories. Maybe you could use symbolic links to point both directories to the same database; that way both browsers would be using the same manual CA list. Currently Firefox uses the legacy dbm database format by default (cert8.db, key3.db) and Chromium / Google Chrome uses the new SQLite database format (cert9.db, key4.db). There is an environment variable NSS_DEFAULT_DB_TYPE that reportedly makes Firefox use the new SQLite format as well (see https://wiki.mozilla.org/NSS_Shared_DB_Howto).
  • Neither Firefox nor Chromium / Google Chrome uses the CAs from the package "ca-certificates".

How to download Twitter videos (animated GIFs)

There are 2 types of Twitter videos: animated GIFs and real videos. This post is about animated GIFs. They have the text "GIF" printed on them when they are not playing.

There doesn't seem to be an easy way to download animated GIFs in Google Chrome unless you use an extension.

In Firefox:

  • open the tweet
  • right click on the video
  • choose "This Frame" -> "Page Info"
  • Under "Media" choose the mp4-file and click "Save As..."

 


Password encryption in OpenLDAP

Passwords in OpenLDAP are hashed with SSHA (salted SHA-1) by default.

Changing it to SHA-512 (via crypt):

olcPasswordHash: {CRYPT},{SSHA}
olcPasswordCryptSaltFormat: "$6$%.16s"

This will still accept already existing passwords that are SSHA hashed. New passwords will be hashed with SHA-512.
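With the cn=config backend, the two attributes above can be changed at runtime with ldapmodify. A minimal LDIF sketch, assuming a stock Debian slapd where olcPasswordHash lives on the frontend database and olcPasswordCryptSaltFormat on the global config entry (verify the DNs on your system first):

```ldif
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
replace: olcPasswordHash
olcPasswordHash: {CRYPT},{SSHA}

dn: cn=config
changetype: modify
replace: olcPasswordCryptSaltFormat
olcPasswordCryptSaltFormat: $6$%.16s
```

Typically applied as root over the local socket, e.g. with ldapmodify -Y EXTERNAL -H ldapi:/// -f pwhash.ldif.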

For this to work, crypt(3) in the GNU C library has to support SHA-512:
- /etc/login.defs: ENCRYPT_METHOD SHA512
- man pam_unix (should include sha512)

SHA-512 crypt passwords for LDAP can be generated with slappasswd:

slappasswd -h '{CRYPT}' -c '$6$%.16s'

 


Secure download of RHEL ISO installation images

You will probably download the RHEL ISO image from within the Red Hat Customer Portal and therefore use an encrypted HTTPS connection (download URL is https://access.cdn.redhat.com/...). The SHA-256 checksums for the ISO images are on the download page.

Red Hat also provides a page with all GPG keys they use for signing their software packages. In the Customer Portal, go to "Security" -> "Product Signing (GPG) Keys" (https://www.redhat.com/security/team/key/).

There are download links for the public keys (https://www.redhat.com/...). The keys are also available on the keyserver pgp.mit.edu, so you can use the following commands to import the main Red Hat key into your GPG keyring and display its fingerprint:

# gpg --keyserver pgp.mit.edu --recv-keys fd431d51
# gpg --fingerprint -k fd431d51

Compare the fingerprint of the Red Hat public key with the fingerprint on the Customer Portal website. You cannot use the GPG key for verifying the ISO files, but it is useful for verifying e.g. RPM package updates that you download directly from Red Hat websites and that are not installed the usual way via an official yum repository.
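The SHA-256 checksum step itself can be sketched like this (file names are hypothetical stand-ins; use the real ISO and the checksum line from the download page):

```shell
cd "$(mktemp -d)"

# Hypothetical stand-in for the downloaded ISO:
echo "dummy ISO content" > rhel-dvd.iso

# Put the SHA-256 line from the download page into a file
# (here we generate it locally just to demonstrate the format):
sha256sum rhel-dvd.iso > SHA256SUM

# Verify the image; prints "rhel-dvd.iso: OK" on success
sha256sum -c SHA256SUM
```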

 


iSCSI connection states in Open-iSCSI

This is what happens to the iSCSI connection state when the underlying network interface changes from "UP BROADCAST RUNNING MULTICAST" to "UP BROADCAST MULTICAST".

Log entries showing that the network interface no longer has the state "RUNNING":

Jul 3 14:17:31 host kernel: [974138.571169] bnx2 0000:08:05.0 eth2: NIC Copper Link is Down
Jul 3 23:05:05 host kernel: [1005760.957474] sd 10:0:0:0: rejecting I/O to offline device
... previous message repeats many times ...

Checking iSCSI connection state:

# iscsiadm -m session -P1
...
iSCSI Connection State: TRANSPORT WAIT
iSCSI Session State: FREE
Internal iscsid Session State: REOPEN

Log entry once the network interface state is back to "RUNNING":

Jul 4 06:56:31 host kernel: [1034019.191222] bnx2 0000:08:05.0 eth2: NIC Copper Link is Up, 1000 Mbps full duplex

Checking iSCSI connection state again:

# iscsiadm -m session -P1
...
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

The output of "iscsiadm -m session -P1" can be used for monitoring the iSCSI connection e.g. in a simple Nagios or Icinga Perl script.
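A minimal check along those lines can be sketched in shell as well. The parsing runs on sample text here so the snippet works without a live session; on a real host you would feed it the actual iscsiadm output:

```shell
# Minimal Nagios/Icinga-style check parsing "iscsiadm -m session -P1" output.
# check_iscsi TEXT -> prints an OK or CRITICAL line
check_iscsi() {
    state=$(printf '%s\n' "$1" | sed -n 's/.*iSCSI Connection State: //p')
    if [ "$state" = "LOGGED IN" ]; then
        echo "OK - iSCSI connection state: $state"
    else
        echo "CRITICAL - iSCSI connection state: ${state:-unknown}"
    fi
}

# Sample output; on a real host use: check_iscsi "$(iscsiadm -m session -P1)"
sample='iSCSI Connection State: TRANSPORT WAIT
iSCSI Session State: FREE'
check_iscsi "$sample"    # prints a CRITICAL line
```

A real plugin would also handle multiple sessions and map the states to proper Nagios exit codes (0/1/2).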


HSTS with Apache and Chrome

  • HSTS (HTTP Strict Transport Security) prevents your browser from visiting a website over an unencrypted "http://..." URL. Instead you have to use the encrypted "https://..." URL, otherwise your browser refuses to load the website.
    Either the webserver of the website you are visiting suggests the use of HSTS to your browser by sending an additional HTTP header, or you manually configure a certain website yourself in your browser.
  • Apache requires the module mod_headers to make the necessary changes to the HTTP headers.
  • Add this to your Apache vhost configuration:
    Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
    For a description of all options see RFC: https://tools.ietf.org/html/rfc6797#section-6
    The "preload" option is not part of the RFC. It just signals that you want your site to be added to the browser builtin list of HSTS sites (see below). If you do not plan to get listed, you may omit this option.
  • Visit the site at least once using HTTPS in your Chrome browser ("trust on first use"). The HSTS configuration of the site (provided by the Apache STS header) will be added to an internal Chrome list. HSTS really depends on this internal browser list. Webservers only send an additional HTTP header that webbrowsers may or may not honor.
  • Add, delete or check websites in your Chrome browser:
    chrome://net-internals/#hsts
    Changes take place immediately without having to restart Chrome.
    You can add sites even if they don't send the special STS header.
    You can combine those entries with PKP (Public Key Pinning) by providing fingerprints for all accepted public keys of a website.
  • Chrome ships with a builtin list of sites that require HSTS. If you run a large public website, you might want to get included in that list: https://hstspreload.appspot.com/
    These builtin sites get listed as "static_..." in your internal Chrome browser list. All other sites (added manually or by honoring the STS header) get listed as "dynamic_...".
  • You cannot delete site entries from the builtin list (assuming that you use the official Chrome browser and that it has not been manipulated).
  • This is the message you get in Chrome when HSTS is violated on a website (in this case the certificate of www.rolandschnabel.de has expired and therefore Chrome refuses to establish the HTTPS connection):
You cannot visit www.rolandschnabel.de right now because the website uses HSTS. Network errors and attacks are usually temporary, so this page will probably work later.
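Whether a server sends the STS header at all is easy to check from the command line. A small sketch; the parsing runs on sample headers here so it works offline, and example.com is just a placeholder:

```shell
# check_hsts: read HTTP response headers on stdin, report HSTS status.
# Live check would be: curl -sI https://example.com | check_hsts
check_hsts() {
    # Header names are case-insensitive, hence grep -i
    if grep -qi '^Strict-Transport-Security:'; then
        echo "HSTS header present"
    else
        echo "no HSTS header"
    fi
}

printf 'HTTP/1.1 200 OK\r\nStrict-Transport-Security: max-age=15768000; includeSubDomains\r\n' | check_hsts
```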

Important things to note:

  • Even for HSTS enabled sites, you may still be able to type in the "http://..." URL in the browser address bar. Chrome automatically recognizes the URL and redirects you to the corresponding "https://..." URL.
    This is different from traditional HTTP redirects, because no unencrypted traffic is sent over the network. The redirection already takes place in the browser.
    The downside of this behaviour is that it makes it hard for people to identify if a website is using HSTS or simply redirects all traffic from HTTP/port 80 to HTTPS/port 443 (HTTP status codes 3xx).
  • Many browser plugins now offer the same functionality (redirect some or all website addresses to HTTPS URLs).
  • Maybe some day HTTPS URLs will become the default in webbrowsers. If you type a URL into the address bar, or select a URL without the leading "http(s)://", the browser would first redirect you automatically to the HTTPS URL. Only if no connection is possible would you receive a warning message and get redirected to the HTTP URL. Let's make HTTPS the default in browsers and accept HTTP only for a small number of exceptions.
    No green lock icon for SSL encrypted websites, just red unlock icons for unencrypted websites.

 


Secure download of Ubuntu ISO installation images

Please follow the instructions on this page:
https://help.ubuntu.com/community/VerifyIsoHowto

There is another website, but it doesn't use SSL / HTTPS:
http://www.ubuntu.com/download/how-to-verify

The procedure is the same as I have already described for CentOS or Debian in my previous posts:

  1. Import the GPG-key and verify its fingerprint.
  2. Download the checksum file and verify its signature with the GPG-key.
  3. Check the iso file with the checksum file.
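The three steps above can be rehearsed locally with a throwaway GPG key. In the real flow you would import Ubuntu's signing key (step 1, key IDs from the linked HowTo) and use the SHA256SUMS / SHA256SUMS.gpg files from the release mirror; everything below is a self-contained sketch:

```shell
# Rehearse steps 2 and 3 in a scratch directory with a throwaway key.
cd "$(mktemp -d)"
export GNUPGHOME=$PWD/gnupg && mkdir -m 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Key <demo@example.com>" default default never

echo "iso content" > demo.iso
sha256sum demo.iso > SHA256SUMS                                # checksum file
gpg --batch --detach-sign --output SHA256SUMS.gpg SHA256SUMS   # signed checksums

gpg --verify SHA256SUMS.gpg SHA256SUMS    # step 2: reports "Good signature"
sha256sum -c SHA256SUMS                   # step 3: prints "demo.iso: OK"
```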

Again, the fingerprint of the GPG-key is on an SSL encrypted website, where you have to check the website certificate and its root CA.

Firefox ships with its own set of root CAs ("Builtin Object Token" as the Security Device in advanced preference settings). Here is a list of all root CAs included in Firefox along with their fingerprints:
https://mozillacaprogram.secure.force.com/CA/IncludedCACertificateReport

Builtin root CAs are hardcoded in /usr/lib/firefox/libnssckbi.so

CAs marked as "Software Security Device" are usually intermediate certificates that were downloaded from websites and stored locally. CAs that are not builtin are either stored on a PKCS#11 compatible smartcard attached to your PC/laptop or saved to your home directory:
certutil -d ~/.mozilla/firefox/xxx.default -L

Chromium / Google Chrome does not ship with its own CA list but uses the CAs from the underlying operating system:
https://www.chromium.org/Home/chromium-security/root-ca-policy

On Ubuntu 16.04 these CAs are hardcoded in /usr/lib/x86_64-linux-gnu/nss/libnssckbi.so which is part of the package "libnss3".

Important things to note:

  • Verification of ISO images is based on GPG-keys, which have to be checked by their fingerprints. You can get that fingerprint from an SSL secured website.
  • The security of a website depends on the root CA which is used to sign the website certificate. These CAs are stored locally in different locations based on the browser you are using.
  • Neither Firefox nor Chromium / Google Chrome uses the CAs from the package "ca-certificates".

Buying a used SATA disk

With the evolution of SSD drives, people are selling their old magnetic disks really cheap on Ebay and other platforms. Here are some steps to take after plugging a used SATA drive into your Linux system.

Keep in mind that all disk information probably can be manipulated, including model name, serial number, firmware, etc.

Check general drive information

# hdparm -I /dev/sdx
- Model Number
- Serial Number (to identify physical drive e.g. in case of replacement)
- Nominal Media Rotation Rate
- DMA: udma6
- Write cache
- SMART error logging
- SMART self-test
- SCT Error Recovery Control (if used in a RAID array)
- Security: Password not enabled/locked
(Enabled features are preceded by *)

Check SMART capabilities

# smartctl -i /dev/sdx
- SMART support is: Available
- SMART support is: Enabled (if not, enable it with "smartctl -s on /dev/sdx")

Check detailed SMART information

# smartctl -a /dev/sdx
- Model Family/Device Model
- User Capacity
- Rotation Rate
- SATA Version: current speed
- Vendor specific SMART attributes:
  o Start_Stop_Count
    (Usually the same as Power_Cycle_Count)
  o Reallocated_Sector_Ct
    (Bad sectors that have been marked by the disk?)
  o Power_On_Hours
    (Disk has been used 24/7 as a NAS drive?)
  o Power_Cycle_Count
    (Usually the same as Start_Stop_Count)
  o G-Sense_Error_Rate
    (Disk has been dropped on the floor?)
  o Load_Cycle_Count
    (Usually the same as Start_Stop_Count and Power_Cycle_Count)
  o Temperature_Celsius
- SMART Error Log (Are there any entries?)
- SMART Self-test log (anything other than "Completed without error" is a warning sign)
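A quick way to pull the raw values of the attributes mentioned above out of the smartctl output is a small awk one-liner. Sample attribute lines are used here so the sketch runs without a disk; the values are made up for illustration:

```shell
# Sample lines in "smartctl -a" attribute-table format; on a real system
# use: out=$(smartctl -a /dev/sdx)
out='  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   059   059   000    Old_age   Always       -       37143
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       44'

# attr_raw TEXT NAME -> raw value (last column) of the matching attribute line
attr_raw() {
    printf '%s\n' "$1" | awk -v name="$2" '$2 == name { print $NF }'
}

echo "Reallocated sectors: $(attr_raw "$out" Reallocated_Sector_Ct)"
echo "Power-on hours:      $(attr_raw "$out" Power_On_Hours)"
```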

Temperature history and SCT

# smartctl -x /dev/sdx
- Temperature history
- SCT Error Recovery Control
    (Only important for use in RAID arrays, see one of my previous posts)

SMART tests

SMART self-tests do not noticeably degrade drive performance; they mostly collect statistical data from the drive. Online and offline tests can be executed during normal operation.

# smartctl -t long /dev/sdx
Expected output:

root@linux:~# smartctl -t long /dev/sdx

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 103 minutes for test to complete.
Test will complete after Fri May 20 09:48:56 2016

Use smartctl -X to abort test.

Check test result in drive logs:

# smartctl -l selftest /dev/sdx
Expected output:

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 37143 -

# smartctl -l error /dev/sdx
Expected output:

=== START OF READ SMART DATA SECTION ===
SMART Error Log Version: 1
No Errors Logged

Conclusion

So what do you do if some of the values do not look right? Don't worry. The drive might still work without problems for one or two years. But you should keep an eye on it:

  • Run regular SMART tests to see if error rate / reallocated sector count increases.
  • Don't use the drive as a sole disk medium for critical high performance data. Maybe make it part of a RAID 1 or RAID 6 array or use it as a hotspare / cold standby drive. Even then you should run regular SMART tests on the drive.
  • Make sure that the temperature is not too high (it should be somewhere around 40 degrees Celsius, depending on the drive).
  • Minimize power cycles. Using a worn out disk on a PC or laptop that gets rebooted a couple of times every day is not a good idea.
  • If you can afford it, use more file system and RAID caching to minimize disk reads and writes. RAID controllers usually support writethrough and writeback. While writeback minimizes disk writes, it should only be used on battery backed or flash backed RAID controllers. Don't use software RAID or fake RAID controllers.
  • On Linux there is a tool called iotop to identify processes with heavy read/write operations. Reconfigure your system so that such workloads go to different disks.
  • Run the disk for a couple of days without any important data on it. Check SMART values and see if you are comfortable with it.
  • Make frequent backups of your data to prepare for disk failure. Don't use the drive as a backup medium.
  • Don't use the drive in high availability environments (with or without RAID). If you use the drive in your laptop or a PC without RAID, make sure to have a spare drive at hand and make daily backups.

Why should you go through all this and not just buy a new drive? A new drive is at least more reliable, and its higher price will likely pay off.

Even new drives might fail within a couple of days or weeks without any prior signs. There is no guarantee that a drive - new or old - will not stop working from one second to the next. Of course older drives are more likely to fail than new drives (see MTBF / load/unload cycles / power-on hours / warranty duration of your drive specification data-sheet).

But if you prepare carefully for disk failures and minimize the risk, you can save some money and spend it for that brand new SSD drive that will be out on the market in one year. SSD technology is progressing rapidly and it might be worth waiting for the prices to drop.
