
FAQ on storage concepts

Posted on: May 29, 2024 at 07:20 AM

How does the OS detect and manage physical storage devices?

  1. Communication between OS and Storage Device
    1. Widely known types
      1. SATA
      2. NVMe
      3. SCSI
    2. These protocols let the OS detect and address physically connected storage devices
  2. Show storage devices
    1. Linux: Under /dev

      1. Files named sd[X] represent storage devices; each such file is usually referred to as a device node


    2. Windows: Device manager
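On Linux, the kernel's view of block devices can be inspected directly from the shell; a minimal sketch (output depends on the machine, and `lsblk` gives a friendlier tree view where available):

```shell
# Block devices known to the kernel (major/minor numbers, size in 1K blocks, name)
cat /proc/partitions

# SCSI/SATA device nodes appear under /dev as sd[X] (sda, sdb, ...);
# NVMe devices use a different naming scheme (nvme0n1, ...)
ls /dev/sd* /dev/nvme* 2>/dev/null || echo "no sd[X]/nvme nodes visible"
```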

Onboarding steps: from a physical storage device to an accessible file system

  1. Physical Installation
    1. Physically connect the SCSI device to the appropriate SCSI controller or adapter on your system. Ensure that the device is properly powered and connected
  2. Check for Device Recognition
    1. dmesg | grep SCSI
  3. Verify Device Availability
    1. Use commands like lsblk or fdisk -l to list block devices and verify that the new SCSI device is visible. It should be listed as a block device (e.g., /dev/sdX)
  4. Partitioning
    1. If the SCSI device is new or hasn’t been partitioned, you may need to create partitions using a tool like fdisk or parted.
    2. sudo fdisk /dev/sdX
  5. Format Partitions
    1. Once partitions are created (if needed), format them with a file system. For example, use mkfs to create an ext4 file system.
    2. sudo mkfs.ext4 /dev/sdX1
  6. Mount the File System
    1. Create a mount point and mount the file system to make it accessible
    2. sudo mkdir /mnt/mydevice
       sudo mount /dev/sdX1 /mnt/mydevice
  7. Automate Mounting (Optional)
    1. To mount the device automatically at boot, add an entry to the /etc/fstab file
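The steps above can be strung together as a script. The sketch below only prints each command (via the `run` wrapper) rather than executing it, since partitioning and mounting require root and a real device; /dev/sdX and /mnt/mydevice are placeholders:

```shell
# Dry-run sketch of the onboarding flow; swap the echo in run() for "$@"
# to actually execute (needs root and a real device in place of /dev/sdX).
run() { echo "+ $*"; }

run dmesg '| grep -i scsi'              # 2. check the kernel recognized the device
run lsblk                               # 3. verify it appears as a block device
run sudo fdisk /dev/sdX                 # 4. partition (interactive)
run sudo mkfs.ext4 /dev/sdX1            # 5. format the first partition as ext4
run sudo mkdir -p /mnt/mydevice         # 6. create a mount point...
run sudo mount /dev/sdX1 /mnt/mydevice  #    ...and mount the file system
# 7. optional: an /etc/fstab entry so it mounts automatically at boot:
#    /dev/sdX1  /mnt/mydevice  ext4  defaults  0  2
```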

Mount man page

https://linux.die.net/man/8/mount

The argument following the -t is used to indicate the filesystem type. The filesystem types which are currently supported include: adfs, affs, autofs, cifs, coda, coherent, cramfs, debugfs, devpts, efs, ext, ext2, ext3, ext4, hfs, hfsplus, hpfs, iso9660, jfs, minix, msdos, ncpfs, nfs, nfs4, ntfs, proc, qnx4, ramfs, reiserfs, romfs, squashfs, smbfs, sysv, tmpfs, ubifs, udf, ufs, umsdos, usbfs, vfat, xenix, xfs, xiafs. Note that coherent, sysv and xenix are equivalent and that xenix and coherent will be removed at some point in the future - use sysv instead. Since kernel version 2.1.21 the types ext and xiafs do not exist anymore. Earlier, usbfs was known as usbdevfs. Note, the real list of all supported filesystems depends on your kernel.

Network File Systems: cifs, coda, ncpfs, nfs, nfs4, smbfs

Local File Systems: adfs, affs, cramfs, efs, ext, ext2, ext3, ext4, hfs, hfsplus, hpfs, iso9660, jfs, minix, msdos, ntfs, qnx4, reiserfs, romfs, squashfs, ubifs, udf, ufs, umsdos, vfat, xfs, xiafs

Special/Pseudo File Systems: autofs, debugfs, devpts, proc, ramfs, tmpfs, usbfs

Other/Unknown (legacy Unix variants, use sysv): coherent, sysv, xenix

Comparison of two network file system protocols: NFS and SMB/CIFS

  1. Origin and Platform Support:
    • NFS: Originated from Sun Microsystems, primarily for Unix/Linux.
    • SMB/CIFS: Developed by Microsoft for Windows, but also supported on other platforms.
  2. Cross-Platform Compatibility:
    • NFS: Native to Unix/Linux but available on other systems.
    • SMB/CIFS: Designed for seamless sharing within Windows networks, but also compatible with other platforms.
  3. Authentication and Security:
    • NFS: Improved security in NFSv4 with support for stronger authentication.
    • SMB/CIFS: Supports various authentication methods, including workgroup and domain-based options.
  4. Performance:
    • NFS: Generally faster in Unix/Linux environments.
    • SMB/CIFS: Performance improvements over time; influenced by server and client implementations.
  5. File Locking:
    • NFS: Locking was historically handled by the separate, stateless NLM protocol (NFSv3 and earlier); NFSv4 builds stateful, lease-based locking into the protocol itself.
    • SMB/CIFS: Supports sophisticated file locking, including mandatory and advisory mechanisms.
  6. Ease of Use:
    • NFS: Configuration can be complex, especially regarding security.
    • SMB/CIFS: Generally easier to set up, especially within Windows environments.
  7. Versioning:
    • NFS: Various versions, with NFSv4 being the latest major version.
    • SMB/CIFS: Evolved versions, with SMBv3 being the latest major version, introducing encryption and performance improvements.
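On the client side, the two protocols differ most visibly in mount syntax and options. A hedged sketch; server names, share paths, and credentials are placeholders, and the commands are echoed rather than run because they need root plus the client packages (nfs-common/nfs-utils for NFS, cifs-utils for SMB):

```shell
# NFS: path-style source, host:/export
echo sudo mount -t nfs4 fileserver:/export/projects /mnt/projects

# SMB/CIFS: UNC-style source //host/share, with credentials and protocol
# version passed as options (vers=3.0 selects SMBv3, which adds encryption)
echo sudo mount -t cifs //fileserver/projects /mnt/projects \
     -o username=alice,vers=3.0
```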

Workflow for mounting a CSI-supported storage

CSI spec: https://github.com/container-storage-interface/spec/blob/master/spec.md#architecture

K8S CSI volume plugin design spec: https://github.com/kubernetes/design-proposals-archive/blob/main/storage/container-storage-interface.md

When a pod mounts an Azure Files volume:

  1. Mount volume to the node
    1. Kubelet asks the CSI driver to handle the mount request. The CSI driver prepares the storage, then calls the mount command to attach it to a kubelet-specified path on the node and ensures proper access controls.
  2. Mount volume to container
    1. Kubelet creates the container and mounts the same volume from the node into it.
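The two-step flow above corresponds to the CSI RPCs NodeStageVolume (mount once per node at a staging path) and NodePublishVolume (bind-mount into the pod's path). A sketch of the mounts performed on the node; the paths follow kubelet's usual layout but are illustrative, and the commands are echoed since they need root and a real storage account:

```shell
# Illustrative paths; real ones contain the PV name and the pod's UID.
STAGE=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/my-pv/globalmount
TARGET=/var/lib/kubelet/pods/POD_UID/volumes/kubernetes.io~csi/my-pv/mount

# Step 1 (NodeStageVolume): the CSI driver mounts the Azure Files (SMB) share
# once per node, at the kubelet-specified staging path.
echo mount -t cifs //ACCOUNT.file.core.windows.net/SHARE "$STAGE" \
     -o username=ACCOUNT,password=KEY,dir_mode=0777,file_mode=0777

# Step 2 (NodePublishVolume): the staged mount is bind-mounted into the
# pod's volume path, which kubelet then maps into the container.
echo mount --bind "$STAGE" "$TARGET"
```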