Hardware - What about all these chips ?

How to optimize SSD usage ?

  1. The only SSD drives I'm using so far are in Debian machines, so I will only list commands and tools related to this operating system. Nevertheless, overall concepts may apply to other environments.
  2. "Optimizing" means finding the best mix of :
    • read / write performance
    • drive lifetime
    • configuration / tweaking time
  3. The Internetz suggests lots of things / hacks. Some are true, some are not, and most of them may just be obsolete.
Each statement below ("You MUST ...") is given a status (true / false / it depends / TODO) and a comment :

... buy an SLC SSD drive : it depends
  • SLC, MLC and TLC SSD drives all have their pros and cons. From the fastest / highest number of writes / most expensive to the slowest / lowest number of writes / cheapest (sources : 1, 2) :
    1. SLC
    2. MLC
    3. TLC
  • buy one that fits both your needs and your budget
... leave space for over-provisioning : true
  • this is considered a safe practice, the only dispute being the amount of unallocated space to leave : about 7% of the drive capacity, with a maximum of 10GB (see the sizing sketch after this list of statements). Newer drives already have some built-in unallocated space for this, which isn't accessible to the user, so with them there is no need to leave extra unused space. (source)
  • it is not completely clear to me whether "over-provisioning" means :
    • having some free space on the drive (within the partitioned space)
    • OR leaving some space unpartitioned
... align partitions to 32 bits / 4K : false
  • the requirement to align partitions was obsolete even before SSDs existed. This only comes from a limitation in the BIOS itself, and has nothing to do with SSD drives or GNU/Linux (source)
  • current tools (i.e. in 2017) already handle this automatically (source)
... have SWAP and tmpfs filesystems NOT on the SSD : true
  • this will save writes on the SSD and make it last longer
  • Debian already puts tmpfs filesystems in memory (details)
... use ext4 : true
  • ext4 works just fine with SSDs (sources : 1, 2)
... use the noatime mount option : true
... use the discard mount option : false
... use the data=ordered mount option : false
... set up a regular TRIM with cron : false
  • TRIM is not necessary : in some situations TRIM can improve speed, in other cases it can make the system significantly slower. And it is only ever a help until the disk is getting fairly full.
  • What do you lose if you DON'T enable TRIM ?
    When a filesystem deletes a file, it knows the logical blocks are free, but the SSD keeps them around. When the filesystem re-uses them for new data, the SSD then knows that the old physical blocks can be garbage-collected and re-used. So all you are really doing by not using TRIM is delaying the collection of unneeded blocks. As long as the SSD has plenty of spare blocks (remember over-provisioning ?), TRIM gains you nothing at all here.
(source)
... worry about wear leveling : TODO
... use SSDs in RAID for extreme performance : TODO (I've read somewhere this is a bad idea, offering no performance improvement)
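
About the over-provisioning sizing mentioned above : here is a minimal Python sketch of the "7% of the capacity, 10GB maximum" rule of thumb. The function name and arguments are mine ; only the two figures come from the source quoted above.

    def overprovisioning_gb(drive_capacity_gb, ratio=0.07, cap_gb=10):
        """Space to leave unallocated : 7% of the capacity, capped at 10GB."""
        return min(drive_capacity_gb * ratio, cap_gb)

    print(overprovisioning_gb(120))   # 8.4 -> leave ~8.4GB unallocated on a 120GB drive
    print(overprovisioning_gb(500))   # 10  -> the 10GB cap kicks in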

Endianness : little-endian vs big-endian

Endianness refers to the convention used to handle multi-byte values :

Little-endian :
The least significant byte value is at the lowest address. The other bytes follow in increasing order of significance.
Big-endian :
The most significant byte value is at the lowest address. This matches the usual left-to-right order in which we write numbers in hexadecimal.

The Intel x86 processor architecture uses little-endian.
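
A minimal Python sketch to make the byte ordering visible (the 0x12345678 value is arbitrary) :

    import struct
    import sys

    value = 0x12345678
    print(struct.pack('<I', value).hex())   # little-endian : '78563412', least significant byte first
    print(struct.pack('>I', value).hex())   # big-endian    : '12345678', same order as written
    print(sys.byteorder)                    # 'little' on x86 machines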

What's an ICH ?

ICH stands for I/O Controller Hub. It is the "southbridge" chip of Intel platforms : it manages most ports and buses (USB, SATA / IDE, audio, PCI, LPC / BIOS, SMBus), while memory and graphics are handled by the northbridge (MCH).

What happens during the POST ?

  1. Test Random Access Memory
  2. Conduct an inventory of the hardware devices installed in the computer
  3. Configure hard and floppy disks, keyboard, monitor, serial and parallel ports
  4. Configure other devices installed in the computer such as CD-ROM drives and sound cards
  5. Initialize computer hardware required for computer features such as Plug and Play and Power Management
  6. Run Setup if requested
  7. Load and run the Operating System such as DOS, OS/2, UNIX, or Windows 95 or NT

USB pin assignment

USB (1.x / 2.0) cables / connectors are made of 4 signal wires plus a shield / drain wire (often counted as a 5th conductor) :

Wires and colors :
  1. VBUS (+5V) : red
  2. D- : white
  3. D+ : green
  4. GND : black
  • shield / drain wire : bare, connected to the connector shell
Female connector
Common motherboard pin-out (from Intel design guides)

RAID levels

Level 0 : striping
  Layout :
    • Disk 1 : Block 1, Block 3, Block 5
    • Disk 2 : Block 2, Block 4, Block 6
  Pros : maximal write performance
  Cons : no data protection

Level 1 : mirroring
  Layout :
    • Disk 1 : Block 1, Block 2, Block 3
    • Disk 2 : Block 1, Block 2, Block 3
  Pros : highest-performing RAID level
  Cons : lower usable capacity

Level 5 : striping with rotating parity
  Layout :
    • Disk 1 : Block 1, Block 4, Parity 7+8+9
    • Disk 2 : Block 2, Parity 4+5+6, Block 7
    • Disk 3 : Block 3, Block 5, Block 8
    • Disk 4 : Parity 1+2+3, Block 6, Block 9

Level 6 : striping with rotating double parity
  Layout :
    • Disk 1 : Block 1, Block 4, Block 7, Parity 10+11+12 (1), Parity 13+14+15 (2)
    • Disk 2 : Block 2, Block 5, Parity 7+8+9 (1), Parity 10+11+12 (2), Block 13
    • Disk 3 : Block 3, Parity 4+5+6 (1), Parity 7+8+9 (2), Block 10, Block 14
    • Disk 4 : Parity 1+2+3 (1), Parity 4+5+6 (2), Block 8, Block 11, Block 15
    • Disk 5 : Parity 1+2+3 (2), Block 6, Block 9, Block 12, Parity 13+14+15 (1)
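
About the "Parity x+y+z" notation above : the parity block of a stripe is simply the bitwise XOR of its data blocks, which is what allows any single missing block to be rebuilt. A minimal Python sketch (the block contents are arbitrary) :

    # one RAID 5 stripe : 3 data blocks + 1 parity block (cf. "Parity 1+2+3")
    block1, block2, block3 = b'\x01\x02', b'\xaa\xbb', b'\x10\x20'
    parity = bytes(a ^ b ^ c for a, b, c in zip(block1, block2, block3))

    # if the disk holding block2 dies, block2 is rebuilt by XORing everything left
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(block1, block3, parity))
    assert rebuilt == block2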

Some good thoughts / notes about RAID

  • Initially, RAID meant Redundant Array of Inexpensive Disks, but this changed to Redundant Array of Independent Disks during the 1990s (source)
  • (I am not the author of this comment : the "I" below is not me. source)

    I used to think RAID 5 and RAID 6 were the best RAID configs to use. It seemed to offer the best bang for buck. However, after seeing how long it took to rebuild an array after a drive failed (over 3 days), I'm much more hesitant to use those RAIDs. I much rather prefer RAID 1+0 even though the overall cost is nearly double that of RAID 5. It's much faster, and there is no rebuild process if the RAID controller is smart enough. You just swap failed drives, and the RAID controller automatically utilizes the backup drive and then mirrors onto the new drive. Just much faster and much less prone to multiple drive failures killing the entire RAID.

    The main reason is not only the rebuild time (which indeed is horrible for a loaded system), but also the performance characteristics that can be terrible if you do a lot of writing.
    We for example more than doubled the throughput of Cassandra by dropping RAID5. So the saving of 2 disks per server in the end required buying almost twice as many servers. (source)

    There is never a case when RAID 5 is the best choice, ever. There are cases where RAID 0 is mathematically proven more reliable than RAID 5. RAID 5 should never be used for anything where you value keeping your data. I am not exaggerating when I say that very often, your data is safer on a single hard drive than it is on a RAID 5 array : the problem is that once a drive fails, during the rebuild, if any of the surviving drives experience an URE, the entire array will fail. On consumer-grade SATA drives that have a URE rate of 1 in 10^14, that means if the data on the surviving drives totals 12TB, the probability of the array failing rebuild is close to 100% (10^14 is bits. When you divide 10^14 by 8 you get 12.5 trillion bytes, or 12.5 TB). Enterprise SAS drives are typically rated 1 URE in 10^15, so you improve your chances ten-fold. Still an avoidable risk.
    RAID 6 suffers from the same fundamental flaw as RAID 5, but the probability of complete array failure is pushed back one level, making RAID 6 with enterprise SAS drives possibly acceptable in some cases, for now (until hard drive capacities get larger).
    I no longer use parity RAID. Always RAID 10. (source)
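
  • A back-of-the-envelope check of the URE figures quoted above, naively assuming that every bit read during the rebuild is an independent trial at the advertised error rate : under this simplified model, the 12TB / 1 in 10^14 case already gives a failure probability above 60%, and it climbs towards certainty as the amount of data to re-read grows. A minimal Python sketch (the function name and the naive model are mine) :

    def rebuild_failure_probability(data_tb, ure_rate=1e-14):
        """Probability of hitting at least one URE while re-reading `data_tb`
        terabytes during a rebuild, assuming independent bit errors."""
        bits_read = data_tb * 1e12 * 8
        return 1 - (1 - ure_rate) ** bits_read

    print(rebuild_failure_probability(12))         # ~0.62 : consumer drives (1 URE in 10^14)
    print(rebuild_failure_probability(12, 1e-15))  # ~0.09 : enterprise drives (1 URE in 10^15)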

  • When comparing RAID 5 to RAID 10, RAID 5 apparently saves the price of 1 disk, but RAID 5 :
    • costs more in administration
    • provides lower I/O
    As a summary : RAID 5 is not worth the price anymore (source).
  • (Again, not the author of this one, source) There are 2 types of RAID controllers :
    1. fully hardware (rare and $$$)
    2. software on dedicated hardware. The latter is just another CPU which will perform the XOR computation. Chances are that a multi-core CPU can beat this controller if your box is used as a server. I would not recommend using this feature : if for any reason your controller fails or is no longer supported by a new software release, you will lose your data. With software RAID, in case of system failure, you can mount your disks on another system.

ACPI power states

  • S1 : Standby. Power consumption : low, < 50 Watts (desktop)
  • S2 : (not used)
  • S3 : "Save to RAM". RAM, USB and PME are powered on. Power consumption : lower than S1, < 15 Watts (desktop)
  • S4 : "Save to File" = Hibernate : the context is saved to disk. Power consumption : minimum, < 5 Watts (desktop)
  • S5 : computer software-powered off. The only difference between the S4 and S5 sleeping states is that no context is saved in S5.
  • S6 : computer unplugged from the power supply. Power consumption : 0.
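
On a GNU/Linux machine, the suspend methods actually supported by the kernel / firmware can be read from sysfs ("mem" roughly maps to S3 and "disk" to S4). A minimal Python sketch :

    # /sys/power/state lists the supported suspend methods,
    # e.g. "freeze standby mem disk" (standby ~ S1, mem ~ S3, disk ~ S4)
    with open('/sys/power/state') as f:
        print(f.read().split())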