Monthly Archives: December 2015

Seagate 3.5" disk drives

The overview below clearly shows that SAS is only used where the performance requirements are highest. All other fields of operation are covered by SATA, which is equal in reliability and enterprise features. If you don't need that extra bit of performance, consider SATA drives, especially for NAS or cloud storage: SAS drives offer less capacity and are much more expensive in terms of TCO (not counting IOPS).

If you are interested in SSDs, consider the following:
- Compare DWPD (Drive Writes Per Day) to warranty period
- Compare TBW (Terabytes Written) to drive size
- Match your workload to the endurance class; there are huge price differences
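The two endurance figures are convertible into each other. A minimal sketch with hypothetical numbers (960 GB drive, 1 DWPD, 5 years warranty; these are not vendor figures):

```shell
# TBW = DWPD * capacity (GB) * 365 days * warranty years / 1000 (GB per TB)
dwpd=1
capacity_gb=960
years=5
tbw=$(( dwpd * capacity_gb * 365 * years / 1000 ))
echo "${tbw} TBW over the warranty period"   # 1752 TBW over the warranty period
```

So a datasheet quoting 1752 TBW for a 960 GB drive with a 5-year warranty corresponds to 1 DWPD.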

Desktop
http://www.seagate.com/www-content/datasheets/pdfs/desktop-hdd-8tbDS1770-7-1511US-en_US.pdf
up to 8 TB
16-256 MB cache
144-220 MB/s
300,000 load cycles
4.0-7.2 W power in idle mode
2 years warranty

Desktop SSHD
http://www.seagate.com/www-content/product-content/barracuda-fam/desktop-sshd/en-us/docs/desktop-sshd-data-sheet-ds1788-2-1308us.pdf
up to 4 TB
64 MB cache
180-210 MB/s
300,000 load cycles
ca. 6.2 W power in idle mode
3 years warranty

NAS
http://www.seagate.com/www-content/datasheets/pdfs/nas-hddDS1789-5A-1511US-en_US.pdf
up to 6 TB
64-128 MB cache
180-216 MB/s
1,000,000 hours MTBF
2.5-7.2 W power in idle mode
3 years warranty

Enterprise NAS
http://www.seagate.com/www-content/product-content/enterprise-hdd-fam/enterprisenas-hdd/_shared/docs/ent-nas-hdd-ds1841-3-1507us.pdf
up to 8 TB
7200 RPM
128-256 MB cache
216-230 MB/s
1,200,000 hours MTBF
4.5-6.9 W power in idle mode
5 years warranty

Archive
http://www.seagate.com/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/archive-hdd-ds1834-4-1412us.pdf
up to 8 TB
128 MB cache
180-190 MB/s
300,000 load cycles
800,000 hours MTBF
3.5-5 W power in idle mode
3 years warranty

Terascale (scale-out storage)
http://www.seagate.com/www-content/product-content/constellation-fam/constellation-cs/en-us/docs/terascale-hdd-data-sheet-ds1793-1-1306us.pdf
4 TB
64 MB cache
140-170 MB/s
300,000 load cycles
800,000 hours MTBF
4.59 W power in idle mode
3 years warranty

Kinetic (ethernet interface, object data storage API)
http://www.seagate.com/www-content/product-content/hdd-fam/kinetic-hdd/_shared/docs/kinetic-ds1835-2-1503us.pdf
4 TB
64 MB cache
60 MB/s
800,000 hours MTBF
300,000 load cycles
3 years warranty

Enterprise Capacity (SAS + SATA)
http://www.seagate.com/www-content/product-content/enterprise-hdd-fam/enterprise-capacity-3-5-hdd/constellation-es-4/en-us/docs/ent-capacity-3-5-hdd-8tb-ds1863-1-1508us.pdf
8 TB
7200 RPM
256 MB cache
up to 237 MB/s
2,000,000 hours MTBF
0.44% AFR
9 W power in idle mode
5 years warranty

Enterprise 10K (SAS)
http://www.seagate.com/www-content/product-content/enterprise-performance-savvio-fam/enterprise-performance-10k-hdd/ent-perf-10k-v8/en-us/docs/ent-performance-10k-hdd-ds1785-4c-1505us.pdf
up to 1.8 TB
10,000 RPM
128 MB cache
108-241 MB/s
0.44% AFR
3.88-4.55 W power in idle mode
5 years warranty

Enterprise 15K (SAS)
http://www.seagate.com/www-content/product-content/enterprise-performance-savvio-fam/enterprise-performance-15k-hdd/ent-perf-15k-5/en-us/docs/enterprise-performance-15k-hdd-ds1797-5c-1504us.pdf
up to 600 GB
15,000 RPM
128 MB cache
160-250 MB/s
0.44% AFR
4.8-5.3 W power in idle mode
5 years warranty


Western Digital 3.5" disk drives

Because of their low prices, WD drives are well suited to the SOHO market. Most of them have a SATA interface unless otherwise noted. SAS drives usually consume more power, but despite having a smaller cache they rank at the upper end of performance compared to similar SATA drives.

Caviar Green (cool, quiet, decreased power)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701229.pdf
up to 3 TB
110 MB/s (123-150 MB/s for *ZRX/*ZDX models)
Intellipower, so no fixed rotational speed (RPM)
64 MB cache
300,000 load cycles
2.1-5.5 W power in idle mode (less power for *ZRX/*ZDX models)
2 years warranty

Green (cool, quiet, decreased power)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771438.pdf
up to 4 TB
ca. 150 MB/s
Intellipower, so no fixed rotational speed (RPM)
64 MB cache
300,000 load cycles
2.5-3.3 W power in idle mode
2 years warranty

Caviar Blue (standard desktop)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701277.pdf
up to 1 TB
126-150 MB/s
7200 RPM
8-64 MB cache
300,000 load cycles
4.9-6.1 W power in idle mode
2 years warranty

Blue (standard desktop, energy efficient for non *X models)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771436.pdf
up to 6 TB
126-175 MB/s (>= 147 MB/s for *Z?? models)
5400 RPM (7200 RPM for *X models)
16-64 MB cache
300,000 load cycles
2.5-3.4 W power in idle mode (>= 4.9 W for *X models)
2 years warranty

Caviar Black (desktop performance)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701276.pdf
up to 2 TB
126-150 MB/s
7200 RPM
64 MB cache (32 MB for *LX model)
300,000 load cycles
5.6-8.2 W power in idle mode
5 years warranty

Black (desktop high performance)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771434.pdf
up to 6 TB
150-218 MB/s
7200 RPM
64-128 MB cache
300,000 load cycles
6.1-7.6 W power in idle mode (8.1 W for *FZEX models)
5 years warranty

Red / Red Pro (NAS storage, *CX models are 2.5", *FF?? are Pro models and faster)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800002.pdf
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800022.pdf
up to 6 TB (up to 1 TB for *CX models)
147-214 MB/s (144 MB/s for *CX models)
Intellipower, so no fixed rotational speed (7200 RPM for *FF?? Pro models)
64-128 MB cache (16 MB for *CX models)
600,000 load cycles
1,000,000 hours MTBF
2.3-3.4 W power in idle mode (0.6 W for *CX models, >= 4.9 W for *FF?? Pro models)
3 years warranty (5 years warranty for Pro models)

Caviar Re (RAID Edition with PATA interface)
http://support.wdc.com/product.aspx?ID=504&lang=en

Re (RAID Edition)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800044.pdf
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800066.pdf
SAS: http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771386.pdf
up to 6 TB
128-225 MB/s
7200 RPM
32-128 MB cache
600,000 load cycles
1,200,000-2,000,000 hours MTBF
0.63% AFR
4.4-9.2 W power in idle mode
5 years warranty

Se (Datacenter capacity, increased reliability)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800042.pdf
up to 6 TB
164-214 MB/s
7200 RPM
64-128 MB cache
300,000 load cycles
800,000-1,200,000 hours MTBF
4.6-8.1 W power in idle mode
5 years warranty

Ae (Datacenter archive, spin-down capability for cold data)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-800045.pdf
6 TB
> 150 MB/s
5760 RPM
64 MB cache
300,000 load cycles
500,000 hours MTBF
4.8 W power in idle mode
3 years warranty

Xe (Datacenter, SAS)
http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-771463.pdf
up to 900 GB
204 MB/s
10,000 RPM
32 MB cache
600,000 load cycles
2,000,000 hours MTBF
5.2 W power in idle mode
5 years warranty


SCT Error Recovery Control in RAID drives

SCT ERC (SMART Command Transport Error Recovery Control) controls how much time a drive spends trying to recover from read/write errors on defective sectors. After that time has expired, the drive gives up on fixing the problem itself and reports a read/write failure to the RAID controller. This prevents the RAID array from being degraded just because one drive has a single defective sector; a RAID recovery might take a long time and stresses all remaining drives.

Linux's mdraid handles the ERC timeout as follows:
- Read missing data from other RAID devices
- Overwrite bad block
- Reread bad block
If overwriting or rereading the bad block fails again, the drive is finally disabled and the array is degraded.

Hard drive manufacturers have different names for this error recovery feature:
- Western Digital: TLER (For WD Re drives, this feature cannot be disabled, and timeout is fixed to 7 seconds, s. here http://support.wdc.com/KnowledgeBase/answer.aspx?ID=1478. For WD Red drives, this feature can be configured.)
- Seagate: ERC (e.g. for Barracuda ES and ES.2 family SATA enterprise drives, s. here http://knowledge.seagate.com/articles/en_US/FAQ/203991en?language=en_US)
- Samsung, Hitachi: CCTL

The drive's timeout should be lower than the RAID controller's timeout. Check the current timeout of your disk drive:

$ smartctl -l scterc /dev/sda
...
SCT Error Recovery Control command not supported
(If ERC is not supported by the drive, it might be a cheap desktop model.)

Set the disk read and write timeout to 20 seconds (the values are given in units of 100 milliseconds):

$ smartctl -l scterc,200,200 /dev/sda

Check the kernel's SCSI command timeout, which acts as the controller timeout for Linux's software RAID (mdraid):

$ cat /sys/block/sda/device/timeout
30
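Putting the two values together, a quick sanity check might look like this (the numbers are example values; on a real system, parse them from the two commands above):

```shell
# The ERC timeout is reported in units of 100 ms, the kernel timeout in seconds.
erc=70      # e.g. parsed from: smartctl -l scterc /dev/sda  (70 = 7.0 seconds)
kernel=30   # e.g. from: cat /sys/block/sda/device/timeout
if [ $(( erc / 10 )) -lt "$kernel" ]; then
    echo "OK: drive reports the error before the kernel resets the device"
else
    echo "WARNING: raise the kernel timeout or lower the ERC timeout"
fi
```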


Open Source Press is closing

Open Source Press is closing down by the end of the year, so this is your last chance to buy some of their books or ebooks from their website. You can still get them from Amazon or buecher.de for a while longer, but eventually they will run out of stock too.

I own 13 of their printed books, and they are all of good quality. They offer ebooks too. Some (e)books are in English, but most are German.

Merry Christmas and a happy new year.


Randomness in KVM virtual guests

1.)
Virtual guests have less entropy than physical machines because they see fewer hardware events.

$ cat /proc/sys/kernel/random/entropy_avail
158

2.)
Still, /dev/random and /dev/urandom work as expected. /dev/random blocks more often when used inside a virtual guest with tools like ssh-keygen, but produces reliable results. If you depend on /dev/random working faster, switch to /dev/urandom, or use the host's hardware RNG (see virtio-rng, http://wiki.qemu-project.org/Features-Done/VirtIORNG).
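For libvirt-managed guests, the virtio RNG device can be enabled with a snippet like this in the guest's domain XML (a sketch; using /dev/urandom as the backend is a choice that avoids blocking the host):

```xml
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>
```

Inside the guest, the device is picked up by the virtio-rng driver and can feed rngd.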

Despite its reputation, /dev/urandom is as reliable as /dev/random (see http://www.2uo.de/myths-about-urandom), even with low entropy. If in doubt, test it with rngtest (found in the "rng-tools" package on Debian 8 Jessie).

$ cat /dev/urandom | rngtest -c 5000
...
rngtest: FIPS 140-2 successes: 4996
rngtest: FIPS 140-2 failures: 4
...

This failure rate (4 of 5,000 blocks, i.e. 0.08 %) is acceptable for regular use in KVM guests; the FIPS 140-2 tests occasionally fail even on perfectly random input.

3.)
Be careful not to rely exclusively on hardware RNGs. See the ongoing discussion about the use of Intel's RDRAND in Linux kernel and OpenSSL versions.

Because of concerns about the auditability of this processor feature, introduced with Intel's Ivy Bridge CPU architecture, RDRAND support was dropped from OpenSSL completely. OpenSSL 1.0.2 adds RDRAND support again, but only in combination with other entropy sources.

That is how the current Linux kernel works as well (RDRAND data is XORed with other entropy sources, see https://www.change.org/p/linus-torvalds-remove-rdrand-from-dev-random-4/responses/9066).

4.)
Software RNGs are at least debatable. The most famous one is probably haveged; see the discussion at https://lwn.net/Articles/525459/. If you want to use haveged, use it only in combination with rngd, which mixes it with other entropy sources.

5.)
So far there is no need to change anything about RNGs in KVM guests. The only thing to worry about is the seeding of /dev/urandom at system startup. In most modern Linux distributions, /dev/urandom is seeded with random numbers at system startup; the seed is produced by a system service and stored on disk during system shutdown.

In Debian (starting from Debian 8 Jessie) this service is called "systemd-random-seed" and the disk file for the seed is "/var/lib/systemd/random-seed".

6.)
Starting with kernel 4.8, a new DRBG algorithm is used to fill /dev/urandom: ChaCha20. ChaCha20 is supposed to work better and faster than the previous SP800-90A DRBG. Yet another reason to use /dev/urandom. Debian 9 Stretch, for example, uses kernel 4.9 and therefore ChaCha20.

The flow for /dev/urandom seems to be something like this:
external entropy sources (physical)  ->  entropy pool  ->  DRBG  ->  /dev/urandom

Source: https://lwn.net/Articles/686033/

Important things to note:

Whenever you clone a KVM guest or take a snapshot, the random seed file stays the same. Thus the initial seeding of /dev/urandom does not change, making its output more predictable. To avoid this problem, restart the systemd-random-seed service after cloning a guest or restoring a snapshot, so that a fresh seed is written.
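A sketch of writing a fresh seed from /dev/urandom, similar to what systemd-random-seed does at shutdown. A temporary file stands in for /var/lib/systemd/random-seed so the example runs unprivileged; on a real clone, overwrite the actual seed file as root or simply restart the service:

```shell
# Write 512 bytes of fresh randomness to a stand-in seed file.
seed=$(mktemp)
dd if=/dev/urandom of="$seed" bs=512 count=1 2>/dev/null
echo "wrote $(stat -c %s "$seed") bytes of fresh seed"
rm -f "$seed"
```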


Check sasl authentication with Postfix

Create the base64-encoded username and password (here: user "yyy", password "xxx"):

$ echo -ne '\000yyy\000xxx' | openssl base64
AHl5eQB4eHg=

Start TLS session with mailserver:

$ openssl s_client -connect mailserver:25 -starttls smtp
...
---
250 DSN
ehlo test
250-mail.localhost.de
250-PIPELINING
250-SIZE 20480000
250-ETRN
250-AUTH PLAIN LOGIN
250-AUTH=PLAIN LOGIN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
AUTH PLAIN AHl5eQB4eHg=
235 2.7.0 Authentication successful

Note that even though we use a plain text username and password that are only base64-encoded, they are sent encrypted over the network because the session was upgraded to TLS by the -starttls option.
