Hardware - What about all these chips ?


How to test the Power Supply Unit (PSU) of a desktop PC ?

Situation

Something is definitely going wrong, but I have to check whether this is a dead PSU or a failure of some other component.

Details

Wiring of 20- and 24-pin ATX PSU connectors
More on PSUs at lifewire.com, smpspowersupply.com

Solution

  1. while your PC is still connected to the wall plug, touch a bare metal part of its chassis to ground yourself and avoid damaging electronics with electrostatic discharge
  2. as a general safety measure, unplug the PSU from the AC input before opening the computer case : electricity and screwdrivers can break bad
  3. unplug all DC outputs from the PSU :
    • motherboard
    • CPU
    • disks
    • video card
    because :
    • you don't _REALLY_ want to power on everything with a possibly damaged PSU
    • the test you'll perform in the next step is a power on-then-off, so it's better not to inflict it on fragile electronics
  4. put the screwdriver and the like away, then plug the AC input back in
  5. identify the PS-ON pin on the 20/24-pin ATX connector using the picture above : it's THE one with the green wire (number 16 on the 24-pin connector)
  6. connect it to any ground pin : pins 15 and 17, right next to it, are perfect for this
  7. a functional PSU should start now
    • modern units are extremely quiet : check the internal PSU fan is spinning
    • in case of a fanless PSU :
      1. maintain the PS-ON-to-ground connection
      2. check voltages
  8. the fan will stop / voltages will drop to 0 once you remove the PS-ON-to-ground connection

Endianness : little-endian vs big-endian

Endianness refers to the convention used to handle multi-byte values :

Little-endian :
The least significant byte value is at the lowest address. The other bytes follow in increasing order of significance.
Big-endian :
The most significant byte value is at the lowest address. This matches the left-to-right order in which a human writes the value in hexadecimal.

The Intel x86 processor architecture uses little-endian.
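
A minimal C sketch to make this concrete : it inspects the individual bytes of a 32-bit value through a uint8_t pointer and deduces the endianness of the machine it runs on.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t value = 0x01020304;          /* multi-byte value under test */
        uint8_t *bytes = (uint8_t *)&value;   /* its bytes, at increasing addresses */

        /* a little-endian CPU (e.g. x86) prints : 04 03 02 01
         * a big-endian CPU prints :               01 02 03 04 */
        for (int i = 0; i < 4; i++)
            printf("%02X ", bytes[i]);
        printf("\n");

        printf("this machine is %s-endian\n",
               bytes[0] == 0x04 ? "little" : "big");
        return 0;
    }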


What's an ICH ?

ICH stands for I/O Controller Hub : it is Intel's name for the southbridge, the chip that manages the slower ports and buses (USB, SATA, PCI, LPC, audio, ...) :


What happens during the POST ?

  1. Test the RAM
  2. Conduct an inventory of the hardware devices installed in the computer
  3. Configure hard and floppy disks, keyboard, monitor, serial and parallel ports
  4. Configure other devices installed in the computer such as CD-ROM drives and sound cards
  5. Initialize computer hardware required for computer features such as Plug and Play and Power Management
  6. Run Setup if requested
  7. Load and run the operating system (DOS, OS/2, UNIX, Windows 95/NT, ...)

USB pin assignment

USB cables / connectors are made of 5 wires :

Wires and colors :
  • pin 1 : VBUS (+5 V) : red wire
  • pin 2 : D- : white wire
  • pin 3 : D+ : green wire
  • pin 4 : GND : black wire
  • shield : drain wire

Female connector

Common motherboard pin-out (from Intel design guides)


RAID levels

Level 0 : striping
  • Disk 1 : Block 1, Block 3, Block 5
  • Disk 2 : Block 2, Block 4, Block 6
  • pro's : maximal write performance
  • con's : no data protection

Level 1 : mirroring
  • Disk 1 : Block 1, Block 2, Block 3
  • Disk 2 : Block 1, Block 2, Block 3
  • pro's : highest-performing RAID level
  • con's : lower usable capacity

Level 5 : striping with rotating parity
  • Disk 1 : Block 1, Block 4, Parity 7+8+9
  • Disk 2 : Block 2, Parity 4+5+6, Block 7
  • Disk 3 : Block 3, Block 5, Block 8
  • Disk 4 : Parity 1+2+3, Block 6, Block 9

Level 6 : striping with rotating double parity
  • Disk 1 : Block 1, Block 4, Block 7, Parity 10+11+12 (1), Parity 13+14+15 (2)
  • Disk 2 : Block 2, Block 5, Parity 7+8+9 (1), Parity 10+11+12 (2), Block 13
  • Disk 3 : Block 3, Parity 4+5+6 (1), Parity 7+8+9 (2), Block 10, Block 14
  • Disk 4 : Parity 1+2+3 (1), Parity 4+5+6 (2), Block 8, Block 11, Block 15
  • Disk 5 : Parity 1+2+3 (2), Block 6, Block 9, Block 12, Parity 13+14+15 (1)
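
RAID 5/6 parity is a bytewise XOR across the data blocks of a stripe, which is what makes single-block reconstruction possible. A minimal C sketch (toy block size and made-up block contents, purely illustrative) of computing a parity block and rebuilding a lost one :

    #include <stdio.h>
    #include <stdint.h>

    #define BLOCK 8   /* toy block size, in bytes */

    /* dst ^= src, byte by byte : the parity operation used by RAID 5 */
    static void xor_blocks(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] ^= src[i];
    }

    int main(void)
    {
        uint8_t d1[BLOCK] = "block1", d2[BLOCK] = "block2", d3[BLOCK] = "block3";
        uint8_t parity[BLOCK] = {0};

        /* parity = d1 ^ d2 ^ d3 */
        xor_blocks(parity, d1, BLOCK);
        xor_blocks(parity, d2, BLOCK);
        xor_blocks(parity, d3, BLOCK);

        /* disk 2 dies : rebuild its block from the survivors and the parity */
        uint8_t rebuilt[BLOCK] = {0};
        xor_blocks(rebuilt, d1, BLOCK);
        xor_blocks(rebuilt, d3, BLOCK);
        xor_blocks(rebuilt, parity, BLOCK);

        printf("rebuilt block : %s\n", (char *)rebuilt);   /* prints "block2" */
        return 0;
    }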

Some good thoughts / notes about RAID

  • Initially, RAID meant Redundant Array of Inexpensive Disks, but this was changed to Redundant Array of Independent Disks during the 1990s (source)
  • I am not the author of this comment : the "I" below is not me (source)

    I used to think RAID 5 and RAID 6 were the best RAID configs to use. It seemed to offer the best bang for buck. However, after seeing how long it took to rebuild an array after a drive failed (over 3 days), I'm much more hesitant to use those RAIDs. I much rather prefer RAID 1+0 even though the overall cost is nearly double that of RAID 5. It's much faster, and there is no rebuild process if the RAID controller is smart enough. You just swap failed drives, and the RAID controller automatically utilizes the backup drive and then mirrors onto the new drive. Just much faster and much less prone to multiple drive failures killing the entire RAID.

    The main reason is not only the rebuild time (which indeed is horrible for a loaded system), but also the performance characteristics that can be terrible if you do a lot of writing.
    We for example more than doubled the throughput of Cassandra by dropping RAID5. So the saving of 2 disks per server in the end required buying almost twice as many servers. (source)

    There is never a case when RAID 5 is the best choice, ever. There are cases where RAID 0 is mathematically proven more reliable than RAID 5. RAID 5 should never be used for anything where you value keeping your data. I am not exaggerating when I say that very often, your data is safer on a single hard drive than it is on a RAID 5 array : the problem is that once a drive fails, during the rebuild, if any of the surviving drives experiences a URE (unrecoverable read error), the entire array will fail. On consumer-grade SATA drives that have a URE rate of 1 in 10^14, that means if the data on the surviving drives totals 12 TB, the probability of the array failing rebuild is close to 100% (10^14 is bits ; when you divide 10^14 by 8 you get 12.5 trillion bytes, or 12.5 TB). Enterprise SAS drives are typically rated 1 URE in 10^15, so you improve your chances ten-fold. Still an avoidable risk (see the sanity-check sketch after this list).
    RAID 6 suffers from the same fundamental flaw as RAID 5, but the probability of complete array failure is pushed back one level, making RAID 6 with enterprise SAS drives possibly acceptable in some cases, for now (until hard drive capacities get larger).
    I no longer use parity RAID. Always RAID 10. (source)

  • When comparing RAID 5 to RAID 10, RAID 5 apparently saves the price of 1 disk, but RAID 5 :
    • costs more in administration
    • provides lower I/O
    In summary : RAID 5 is not worth the price anymore (source).
  • (Again, not the author of this one, source) There are 2 types of RAID controllers :
    1. fully hardware (rare and $$$)
    2. software on dedicated hardware. The latter is just another CPU that performs the XOR computations. Chances are that a multi-core CPU can beat this controller if your box is used as a server. I would not recommend using this feature : if for any reason your controller fails or is no longer supported by a new software release, you will lose your data. With software RAID, in case of system failure, you can mount your disks on another system.
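
A quick sanity check of the URE arithmetic quoted above, as a C sketch under the usual simplifying assumption that every bit read fails independently with probability 10^-14. The expected URE count over a 12 TB re-read is about 1, which gives a rebuild failure probability around 62 % : not literally "close to 100%", but still far too high to bet your data on.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double ure_rate = 1e-14;      /* consumer SATA : 1 URE per 10^14 bits read */
        double bits = 12e12 * 8;      /* 12 TB of surviving data to re-read */

        /* probability that at least one URE occurs during the rebuild,
         * assuming independent bit reads */
        double p_fail = 1.0 - pow(1.0 - ure_rate, bits);
        printf("P(rebuild fails) ~ %.0f %%\n", p_fail * 100.0);   /* ~62 % */
        return 0;
    }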

ACPI power states

  • S1 : Standby. Power consumption : low, < 50 W (desktop).
  • S2 : (not used)
  • S3 : "Save to RAM" : RAM, USB and PME remain powered. Power consumption : lower than S1, < 15 W (desktop).
  • S4 : "Save to File" = Hibernate : the context is saved to disk. Power consumption : minimal, < 5 W (desktop).
  • S5 : computer software-powered off. The only difference between the S4 and S5 sleeping states is that no context is saved in S5.
  • S6 : computer unplugged from the power supply. Power consumption : 0 W.
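
On Linux, a quick way to see which of these sleep states the machine supports is to read /sys/power/state ("mem" usually maps to S3, "disk" to S4). A minimal C sketch :

    #include <stdio.h>

    int main(void)
    {
        /* the kernel lists the supported sleep states here,
         * e.g. "freeze mem disk" ("mem" ~ S3, "disk" ~ S4) */
        FILE *f = fopen("/sys/power/state", "r");
        char buf[64];

        if (f != NULL && fgets(buf, sizeof buf, f) != NULL)
            printf("supported sleep states : %s", buf);
        if (f != NULL)
            fclose(f);
        return 0;
    }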