
Chapter 1. Deploying VDO

As a system administrator, you can use VDO to create deduplicated and compressed storage pools.

1.1. Introduction to VDO

Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When you set up a VDO volume, you specify a block device on which to construct your VDO volume and the amount of logical storage that you plan to present.

  • When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1 logical to physical ratio: that is, if you are using 1 TB of physical storage, you would present it as 10 TB of logical storage.
  • For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage.
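These ratios work out as simple multiplication; the following sketch assumes an example physical size of 1 TB:

```shell
# Logical size to present for a given physical size, using the
# recommended ratios above. The 1 TB figure is an example value.
PHYS_TB=1
echo "VMs/containers (10:1): present $(( PHYS_TB * 10 )) TB logical"
echo "Object storage (3:1):  present $(( PHYS_TB * 3 )) TB logical"
```

With a 1 TB device this prints 10 TB and 3 TB, matching the examples above.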

In either case, you can simply put a file system on top of the logical device presented by VDO and then use it directly or as part of a distributed cloud storage architecture. Because VDO is thinly provisioned, the file system and applications only see the logical space in use and are not aware of the actual physical space available. Use scripting to monitor the actual available space and generate an alert if use exceeds a threshold: for example, when the VDO volume is 80% full.

Additional resources

  • For more information about monitoring physical space, see Section 2.1, “Managing free space on VDO volumes”.
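A minimal sketch of such a monitoring script, assuming a volume named vdo1 and an 80% threshold (both illustrative):

```shell
#!/bin/sh
# Alert when a VDO volume crosses a usage threshold. The volume name
# (vdo1) and the 80% threshold are assumptions for illustration.
THRESHOLD=80

# vdostats prints one line per volume; field 5 is the Use% column.
USAGE=$(vdostats --human-readable | awk '/vdo1/ { sub("%", "", $5); print $5; exit }')

if [ -n "$USAGE" ] && [ "$USAGE" -ge "$THRESHOLD" ]; then
    logger -t vdo-monitor "VDO volume vdo1 is ${USAGE}% full"
fi
```

Run it from cron or a systemd timer to log periodic alerts.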

1.2. VDO deployment scenarios

You can deploy VDO in a variety of ways to provide deduplicated storage for:

  • both block and file access
  • both local and remote storage

Because VDO exposes its deduplicated storage as a standard Linux block device, you can use it with standard file systems, iSCSI and FC target drivers, or as unified storage.

Note: Deployment of VDO volumes on top of Ceph RADOS Block Device (RBD) is currently supported. However, the deployment of Red Hat Ceph Storage cluster components on top of VDO volumes is currently not supported.


KVM

You can deploy VDO on a KVM server configured with Direct Attached Storage.

VDO Deployment with KVM

File systems

You can create file systems on top of VDO and expose them to NFS or CIFS users with the NFS server or Samba.

Deduplicated NAS

Placement of VDO on iSCSI

You can export the entirety of the VDO storage target as an iSCSI target to remote iSCSI initiators.

Deduplicated block storage target

When creating a VDO volume on iSCSI, you can place the VDO volume above or below the iSCSI layer. Although there are many considerations to be made, some guidelines are provided here to help you select the method that best suits your environment.

When placing the VDO volume on the iSCSI server (target), below the iSCSI layer:

  • The VDO volume is transparent to the initiator, similar to other iSCSI LUNs. Hiding the thin provisioning and space savings from the client makes the appearance of the LUN easier to monitor and maintain.
  • There is decreased network traffic because there are no VDO metadata reads or writes, and read verification for the dedupe advice does not occur across the network.
  • The memory and CPU resources being used on the iSCSI target can result in better performance. For example, the ability to host an increased number of hypervisors because the volume reduction is happening on the iSCSI target.
  • If the client implements encryption on the initiator and there is a VDO volume below the target, you will not realize any space savings.

When placing the VDO volume on the iSCSI client (initiator), above the iSCSI layer:

  • There is a potential for lower network traffic across the network in ASYNC mode if achieving high rates of space savings.
  • You can directly view and control the space savings and monitor usage.
  • If you want to encrypt the data, for example, using dm-crypt, you can implement VDO on top of the crypt and take advantage of space efficiency.


LVM

On more feature-rich systems, you can use LVM to provide multiple logical unit numbers (LUNs) that are all backed by the same deduplicated storage pool. In the following diagram, the VDO target is registered as a physical volume so that it can be managed by LVM. Multiple logical volumes (LV1 to LV4) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified block or file access to the underlying deduplicated storage pool.

Deduplicated unified storage

Deduplicated unified storage design enables multiple file systems to collectively use the same deduplication domain through the LVM tools. Also, file systems can take advantage of the LVM snapshot, copy-on-write, and shrink or grow features, all on top of VDO.


Encryption

Device Mapper (DM) mechanisms such as DM Crypt are compatible with VDO. Encrypting VDO volumes helps ensure data security, and any file systems above VDO are still deduplicated.

Using VDO with encryption

Important: Applying the encryption layer above VDO results in little if any data deduplication. Encryption makes duplicate blocks different before VDO can deduplicate them. Always place the encryption layer below VDO.


1.3. Components of a VDO volume

VDO uses a block device as a backing store, which can include an aggregation of physical storage consisting of one or more disks, partitions, or even flat files. When a storage management tool creates a VDO volume, VDO reserves volume space for the UDS index and VDO volume. The UDS index and the VDO volume interact together to provide deduplicated block storage.

Figure 1.1. VDO disk organization

The VDO solution consists of the following components:

kvdo
A kernel module that loads into the Linux Device Mapper layer provides a deduplicated, compressed, and thinly provisioned block storage volume. The kvdo module exposes a block device. You can access this block device directly for block storage or present it through a Linux file system, such as XFS or ext4. When kvdo receives a request to read a logical block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions is true, kvdo updates its block map and acknowledges the request. Otherwise, VDO processes and optimizes the data.
uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the volume and analyzes data for duplicates. For each new piece of data, UDS quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module.
Command line tools
For configuring and managing optimized storage.

1.4. The physical and logical size of a VDO volume

This section describes the physical size, available physical size, and logical size that VDO can utilize:

Physical size
This is the same size as the underlying block device. VDO uses this storage for:

  • User data, which might be deduplicated and compressed
  • VDO metadata, such as the UDS index
Available physical size
This is the portion of the physical size that VDO is able to use for user data. It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size.
Logical size
This is the provisioned size that the VDO volume presents to applications. It is usually larger than the available physical size. If the --vdoLogicalSize option is not specified, then the logical volume is provisioned to a 1:1 ratio. For example, if a VDO volume is put on top of a 20 GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO volume. VDO currently supports any logical size up to 254 times the size of the physical volume, with an absolute maximum logical size of 4 PB.
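The 20 GB example above works out as follows (a simple check that ignores the additional VDO metadata):

```shell
# Worked check of the example above: a 20 GB block device minus the
# default 2.5 GB UDS index reservation leaves at most 17.5 GB for VDO
# metadata and user data.
awk 'BEGIN {
    physical_gb = 20
    uds_gb      = 2.5
    printf "usable: at most %.1f GB\n", physical_gb - uds_gb
}'
```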

Figure 1.2. VDO disk organization

In this figure, the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device.

Additional resources

  • For more information on how much storage VDO metadata requires on block devices of different sizes, see Section 1.6.4, “Examples of VDO requirements by physical size”.

1.5. Slab size in VDO

The physical storage of the VDO volume is divided into a number of slabs. Each slab is a contiguous region of the physical space. All of the slabs for a given volume have the same size, which can be any power of 2 multiple of 128 MB up to 32 GB. The default slab size is 2 GB to facilitate evaluating VDO on smaller test systems. A single VDO volume can have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB. VDO always reserves at least one entire slab for metadata, and therefore, the reserved slab cannot be used for storing user data. Slab size has no effect on the performance of the VDO volume.

Table 1.1. Recommended VDO slab sizes by physical volume size

Physical volume size Recommended slab size
10–99 GB 1 GB
100 GB – 1 TB 2 GB
2–256 TB 32 GB

Note: The minimal disk usage for a VDO volume using the default settings of 2 GB slab size and 0.25 dense index requires approximately 4.7 GB. This provides slightly less than 2 GB of physical data to write at 0% deduplication or compression. Here, the minimal disk usage is the sum of the default slab size and dense index. You can control the slab size by providing the --config 'allocation/vdo_slab_size_mb=size-in-megabytes' option to the lvcreate command.
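The 8192-slab limit gives the maximum physical sizes quoted above; a quick check:

```shell
# Maximum physical storage for a given slab size, from the 8192-slab
# limit described above.
for SLAB_GB in 2 32; do
    echo "${SLAB_GB} GB slabs -> max $(( 8192 * SLAB_GB / 1024 )) TB physical"
done
```

This prints 16 TB for 2 GB slabs and 256 TB for 32 GB slabs, matching the limits above.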


1.6. VDO requirements

VDO has certain requirements on its placement and your system resources.

1.6.1. VDO memory requirements

Each VDO volume has two distinct memory requirements:

The VDO module
VDO requires a fixed 38 MB of RAM and several variable amounts:

  • 1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache requires a minimum of 150MB RAM.
  • 1.6 MB of RAM for each 1 TB of logical space.
  • 268 MB of RAM for each 1 TB of physical storage managed by the volume.
The UDS index
The Universal Deduplication Service (UDS) requires a minimum of 250 MB of RAM, which is also the default amount that deduplication uses. You can configure the value when formatting a VDO volume, because the value also affects the amount of storage that the index needs. The memory required for the UDS index is determined by the index type and the required size of the deduplication window:

Index type Deduplication window Note
dense 1 TB per 1 GB of RAM A 1 GB dense index is generally sufficient for up to 4 TB of physical storage.
sparse 10 TB per 1 GB of RAM A 1 GB sparse index is generally sufficient for up to 40 TB of physical storage.

The UDS Sparse Indexing feature is the recommended mode for VDO. It relies on the temporal locality of data and attempts to retain only the most relevant index entries in memory. With the sparse index, UDS can maintain a deduplication window that is ten times larger than with dense, while using the same amount of memory. Although the sparse index provides the greatest coverage, the dense index provides more deduplication advice. For most workloads, given the same amount of memory, the difference in deduplication rates between dense and sparse indexes is negligible.
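As a rough sketch, these figures can be combined to estimate per-volume RAM. The input sizes here (1 TB physical, 10 TB logical, a 128 MB block map cache, and the minimum 250 MB dense UDS index) are illustrative assumptions:

```shell
# Estimate RAM for one VDO volume from the figures above. The sizes
# (1 TB physical, 10 TB logical, 128 MB block map cache, 250 MB dense
# UDS index) are example assumptions, not fixed values.
awk 'BEGIN {
    cache = 1.15 * 128                       # RAM for the block map cache
    if (cache < 150) cache = 150             # the cache needs at least 150 MB of RAM
    kvdo = 38 + cache + 1.6 * 10 + 268 * 1   # fixed + cache + 10 TB logical + 1 TB physical
    uds  = 250                               # minimum dense UDS index
    printf "kvdo: ~%.0f MB, UDS: %d MB, total: ~%.0f MB\n", kvdo, uds, kvdo + uds
}'
```

For this configuration the kvdo estimate is about 472 MB, which lines up with the 10GB–1TB row in Table 1.2.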

Additional resources

  • Examples of VDO requirements by physical size

1.6.2. VDO storage space requirements

You can configure a VDO volume to use up to 256 TB of physical storage. Only a certain part of the physical storage is usable to store data. This section provides the calculations to determine the available size of a VDO-managed volume. VDO requires storage for two types of VDO metadata and for the UDS index:

  • The first type of VDO metadata uses approximately 1 MB for each 4 GB of physical storage plus an additional 1 MB per slab.
  • The second type of VDO metadata consumes approximately 1.25 MB for each 1 GB of logical storage, rounded up to the nearest slab.
  • The amount of storage required for the UDS index depends on the type of index and the amount of RAM allocated to the index. For each 1 GB of RAM, a dense UDS index uses 17 GB of storage, and a sparse UDS index uses 170 GB of storage.
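A sketch of these calculations for an example volume (256 GB physical, 2 GB slabs, an equal logical size — all illustrative, with the round-up-to-slab rule ignored for simplicity):

```shell
# Approximate VDO metadata overhead, following the two rules above.
# Example sizes: 256 GB physical, 2 GB slabs, 256 GB logical.
PHYSICAL_GB=256
SLAB_GB=2
LOGICAL_GB=256

SLABS=$(( PHYSICAL_GB / SLAB_GB ))
# First type: ~1 MB per 4 GB of physical storage, plus 1 MB per slab.
META1_MB=$(( PHYSICAL_GB / 4 + SLABS ))
# Second type: ~1.25 MB per 1 GB of logical storage (slab rounding ignored).
META2_MB=$(( LOGICAL_GB * 5 / 4 ))
echo "approximate VDO metadata: $(( META1_MB + META2_MB )) MB"
```

For this example the script prints 512 MB of metadata.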

Additional resources

  • Section 1.6.4, “Examples of VDO requirements by physical size”
  • Section 1.5, “Slab size in VDO”

1.6.3. Placement of VDO in the storage stack

You should place certain storage layers under VDO and others above VDO. In this section, above means that when layer A is above layer B, A is either stored directly on device B, or indirectly on a layer that is stored on B. Similarly, A under B means that B is stored on A.

A VDO volume is a thinly provisioned block device. To prevent running out of physical space, place the volume above a storage layer that you can expand at a later time. Examples of such expandable storage are LVM volumes or MD RAID arrays.

You can place thick-provisioned layers above VDO, but you cannot rely on the guarantees of thick provisioning in that case. Because the VDO layer is thin-provisioned, the effects of thin provisioning apply to all layers above it. If you do not monitor the VDO device, you might run out of physical space on thick-provisioned volumes above VDO.

Supported configurations

  • Layers that you can place only under VDO:
    • DM Multipath
    • DM Crypt
    • Software RAID (LVM or MD RAID)
  • Layers that you can place only above VDO:
    • LVM cache
    • LVM snapshots
    • LVM thin provisioning

Unsupported configurations

  • VDO above other VDO volumes
  • VDO above LVM snapshots
  • VDO above LVM cache
  • VDO above a loopback device
  • VDO above LVM thin provisioning
  • Encrypted volumes above VDO
  • Partitions on a VDO volume
  • RAID, such as LVM RAID, MD RAID, or any other type, above a VDO volume

Additional resources

  • For more information on stacking VDO with LVM layers, see the Stacking LVM volumes article.

1.6.4. Examples of VDO requirements by physical size

The following tables provide approximate system requirements of VDO based on the physical size of the underlying volume. Each table lists requirements appropriate to the intended deployment, such as primary storage or backup storage. The exact numbers depend on your configuration of the VDO volume.

Primary storage deployment
In the primary storage case, the UDS index is between 0.01% to 25% the size of the physical size.

Table 1.2. Storage and memory requirements for primary storage

Physical size RAM usage: UDS RAM usage: VDO Disk usage Index type
10GB–1TB 250MB 472MB 2.5GB dense
2–10TB 1GB 3GB 10GB dense
2–10TB 250MB 3GB 22GB sparse
11–50TB 2GB 14GB 170GB sparse
51–100TB 3GB 27GB 255GB sparse
101–256TB 12GB 69GB 1020GB sparse
Backup storage deployment
In the backup storage case, the UDS index covers the size of the backup set but is not bigger than the physical size. If you expect the backup set or the physical size to grow in the future, factor this into the index size.

Table 1.3. Storage and memory requirements for backup storage

Physical size RAM usage: UDS RAM usage: VDO Disk usage Index type
10GB–1TB 250MB 472MB 2.5 GB dense
2–10TB 2GB 3GB 170GB sparse
11–50TB 10GB 14GB 850GB sparse
51–100TB 20GB 27GB 1700GB sparse
101–256TB 26GB 69GB 3400GB sparse

1.7. Installing VDO

This procedure installs the software necessary to create, mount, and manage VDO volumes.

Procedure

  • Install the vdo and kmod-kvdo packages:
    # yum install vdo kmod-kvdo

1.8. Creating a VDO volume

This procedure creates a VDO volume on a block device.

Prerequisites

  • Install the VDO software. See Section 1.7, “Installing VDO”.
  • Use expandable storage as the backing block device. For more information, see Section 1.6.3, “Placement of VDO in the storage stack”.

Procedure

In all the following steps, replace vdo-name with the identifier you want to use for your VDO volume; for example, vdo1. You must use a different name and device for each instance of VDO on the system.

  1. Find a persistent name for the block device where you want to create the VDO volume. For more information on persistent names, see Chapter 6, Overview of persistent naming attributes. If you use a non-persistent device name, then VDO might fail to start properly in the future if the device name changes.
  2. Create the VDO volume:
    # vdo create \
          --name= vdo-name \
          --device= block-device \
          --vdoLogicalSize= logical-size
    • Replace block-device with the persistent name of the block device where you want to create the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f.

    • Replace logical-size with the amount of logical storage that the VDO volume should present:
      • For active VMs or container storage, use logical size that is ten times the physical size of your block device. For example, if your block device is 1TB in size, use 10T here.
      • For object storage, use logical size that is three times the physical size of your block device. For example, if your block device is 1TB in size, use 3T here.
    • If the physical block device is larger than 16TiB, add the --vdoSlabSize=32G option to increase the slab size on the volume to 32GiB. Using the default slab size of 2GiB on block devices larger than 16TiB results in the vdo create command failing with the following error:
      vdo: ERROR - vdoformat: formatVDO failed on '/dev/block-device': VDO Status: Exceeds maximum number of slabs supported

    Example 1.1. Creating VDO for container storage

    For example, to create a VDO volume for container storage on a 1TB block device, you might use:

    # vdo create \
          --name= vdo1 \
          --device= /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f \
          --vdoLogicalSize= 10T

    Important: If a failure occurs when creating the VDO volume, remove the volume to clean up. See Section 2.10.2, “Removing an unsuccessfully created VDO volume” for details.

  3. Create a file system on top of the VDO volume:
    • For the XFS file system:
      # mkfs.xfs -K /dev/mapper/vdo-name
    • For the ext4 file system:
      # mkfs.ext4 -E nodiscard /dev/mapper/vdo-name
  4. Use the following command to wait for the system to register the new device node:
    # udevadm settle

Next steps

  1. Mount the file system. See Section 1.9, “Mounting a VDO volume” for details.
  2. Enable the discard feature for the file system on your VDO device. See Section 1.10, “Enabling periodic block discard” for details.

Additional resources

  • The vdo(8) man page

1.9. Mounting a VDO volume

This procedure mounts a file system on a VDO volume, either manually or persistently.

Prerequisites

  • A VDO volume has been created on your system. For instructions, see Section 1.8, “Creating a VDO volume”.


Procedure

  • To mount the file system on the VDO volume manually, use:
    # mount /dev/mapper/vdo-name mount-point



  • To configure the file system to mount automatically at boot, add a line to the /etc/fstab file:
    • For the XFS file system:

      /dev/mapper/vdo-name mount-point xfs defaults,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
    • For the ext4 file system:

      /dev/mapper/vdo-name mount-point ext4 defaults,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0

    If the VDO volume is located on a block device that requires network, such as iSCSI, add the _netdev mount option.

Additional resources

  • The vdo(8) man page.
  • For iSCSI and other block devices requiring network, see the systemd.mount(5) man page for information on the _netdev mount option.

1.10. Enabling periodic block discard

This procedure enables a systemd timer that regularly discards unused blocks on all supported file systems.

Procedure

  • Enable and start the systemd timer:
    # systemctl enable --now fstrim.timer

1.11. Monitoring VDO

This procedure describes how to obtain usage and efficiency information from a VDO volume.

Prerequisites

  • Install the VDO software. See Section 1.7, “Installing VDO”.


Procedure

  • Use the vdostats utility to get information about a VDO volume:
    # vdostats --human-readable
    Device                   1K-blocks    Used     Available    Use%    Space saving%
    /dev/mapper/node1osd1    926.5G       21.0G    905.5G       2%      73%
    /dev/mapper/node1osd2    926.5G       28.2G    898.3G       3%      64%

Additional resources

  • The vdostats(8) man page.