<html>
<head>
<title>ZFS on Linux</title>
<meta name="keyword" content="zfs, linux" />
<meta name="description" content="ZFS on Linux FAQ." />
<meta name="robots" content="all" />
</head>
<body>
<center>
<a href="index.html"><img title="Native ZFS on Linux" alt="Native ZFS on Linux" src="images/zfs-linux.png"></a>
<table width=80%>
<tr bgcolor="#aaaaaa">
<th> FAQ - Table of Contents</th>
</tr>
<tr>
<td>
<ul>
<li><a href="#WhatAboutTheLicensingIssue">
1.1 What is the licensing concern?</a></li>
<li><a href="#HowDoIInstallIt">
1.2 How do I install it?</a></li>
<li><a href="#WhyDoesntItBuild">
1.3 Why doesn’t it build?</a></li>
<li><a href="#HowDoIMountTheFileSystem">
1.4 How do I mount the file system?</a></li>
<li><a href="#WhyShouldIUseA64BitSystem">
1.5 Why should I use a 64-bit system?</a></li>
<li><a href="#WhatKernelVersionsAreSupported">
1.6 What kernel versions are supported?</a></li>
<li><a href="#WhatDevNamesShouldIUseWhenCreatingMyPool">
1.7 What /dev/ names should I use when creating my pool?</a></li>
<li><a href="#HowDoIChangeNamesOnAnExistingPool">
1.8 How do I change the /dev/ names on an existing pool?</a></li>
<li><a href="#HowDoISetupVdevIdConf">
1.9 How do I set up the /etc/zfs/vdev_id.conf file?</a></li>
<li><a href="#PerformanceConsideration">
1.10 What’s going on with performance?</a></li>
<li><a href="#WhatDoesZpoolCacheDo">
1.11 What does the /etc/zfs/zpool.cache file do?</a></li>
<li><a href="#HowDoISetupShares">
1.12 How do I set up NFS or SMB shares?</a></li>
<li><a href="#CanIBootFromZFS">
1.13 Can I boot from ZFS?</a></li>
<li><a href="#HowDoIAutomaticallyMountZFSFilesystemsDuringStartup">
1.14 How do I automatically mount ZFS filesystems during startup?</a></li>
<li><a href="#HowDoesZFSonLinuxHandlesAdvacedFormatDrives">
1.15 How does ZFS on Linux handle Advanced Format (4 KB sector) drives?</a></li>
<li><a href="#DoIHaveToUseECCMemory">
1.16 Do I have to use ECC memory?</a></li>
<li><a href="#CanIUseaZVOLforSwap">
1.17 Can I use a ZVOL for swap?</a></li>
<li><a href="#HowDoIGenerateTheZpoolCacheFile">
1.18 How do I generate the /etc/zfs/zpool.cache file?</a></li>
<li><a href="#CanIRunZFSForXenHypervisor">
1.19 Can I run ZFS on Xen Hypervisor or Xen Dom0?</a></li>
<li><a href="#HowCanIHelp">
2.1 How can I help?</a></li>
</ul>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhatAboutTheLicensingIssue">
1.1 What is the licensing concern?</a>
</th>
</tr>
<tr>
<td>
<p> ZFS is licensed under the <a href="http://hub.opensolaris.org/bin/view/Main/opensolaris_license">Common Development and Distribution
License (CDDL)</a>, and the Linux kernel is licensed under the
<a href="http://www.gnu.org/licenses/gpl2.html">GNU General Public License
Version 2 (GPLv2)</a>. While both are free open source licenses, they are
restrictive licenses. The combination of them causes problems because it
prevents using pieces of code exclusively available under one license with
pieces of code exclusively available under the other in the same binary.
In the case of the kernel, this prevents us from distributing ZFS as part
of the kernel binary. However, there is nothing in either license that
prevents distributing it in the form of a binary module or in the form
of source code.</p>
<p> For further reading on this issue see the following excellent articles
regarding non-GPL licensed kernel modules.</p>
<ul>
<li>
<a href="http://www.networkworld.com/news/2006/120606-closed-modules1.html">
Encouraging closed source modules part 1: copyright and software </a>
</li>
<li>
<a href="http://www.networkworld.com/news/2006/120806-closed-modules2.html">
Encouraging closed source modules part 2: law and the module interface </a>
</li>
<li>
<a href="http://www.networkworld.com/news/2006/121106-closed-modules3.html">
Encouraging closed source modules part 3: eliminating the "API update tax" </a>
</li>
</ul>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoIInstallIt">
1.2 How do I install it?</a>
</th>
</tr>
<tr>
<td>
<p>ZFS on Linux is available for numerous distributions and the installation
process largely depends on the package manager. The following distributions
all have support for ZFS and documentation on how it can be installed. If
your distribution isn't listed below you can build ZFS using the officially
released tarballs; a rough build sketch follows the list below.</p>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/ZFS">Arch Linux</a></li>
<li><a href="debian.html">Debian</a></li>
<li><a href="fedora.html">Fedora</a></li>
<li><a href="http://www.funtoo.org/ZFS_Fun">Funtoo</a></li>
<li><a href="http://wiki.gentoo.org/wiki/ZFS">Gentoo</a></li>
<li><a href="epel.html">RHEL / CentOS / SL</a></li>
<li><a href="http://www.sabayon.org/">Sabayon</a></li>
<li><a href="https://www.sprezzatech.com/wiki/index.php/ZFS">SprezzOS</a></li>
<li><a href="https://launchpad.net/~zfs-native/+archive/stable">Ubuntu</a></li>
</ul>
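<p>As a rough sketch of a tarball build (assuming the 0.6.2 release tarballs,
an installed compiler toolchain, and the headers for your running kernel; the
spl package must be built and installed before zfs):</p>
<pre>
$ tar xzf spl-0.6.2.tar.gz && cd spl-0.6.2
$ ./configure && make && sudo make install
$ cd .. && tar xzf zfs-0.6.2.tar.gz && cd zfs-0.6.2
$ ./configure && make && sudo make install
$ sudo modprobe zfs
</pre>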
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhyDoesntItBuild">
1.3 Why doesn’t it build?</a>
</th>
</tr>
<tr>
<td>
<p>Building a kernel module against an arbitrary kernel version is a
complicated thing to do. Every Linux distribution has their own idea of
how this should be done. It depends on the base kernel version, any
distribution specific patches, and exactly how the kernel was configured.
If you run into problems, here are a few things to check; a minimal
configure example follows the list below. If none of these
things explain your problem, then please open a new
<a href="http://github.com/zfsonlinux/zfs/issues">issue</a>
which fully describes the problem.</p>
<ul>
<li> The kernel API changes frequently; version 0.6.2 supports
<b>2.6.26 - 3.11</b> kernels. </li>
<li> There are lots of Linux distributions, <a href=
"#WhatKernelVersionsAreSupported">only these releases</a>
are part of the automated build/regression testing.</li>
<li> This may be a known issue; check the
<a href="http://github.com/zfsonlinux/spl/issues/labels/Build%20Issue">SPL</a> and
<a href="http://github.com/zfsonlinux/zfs/issues/labels/Build%20Issue">ZFS</a> issue trackers. </li>
</ul>
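<p>If configure is picking up the wrong kernel, you can point it at a specific
set of kernel sources explicitly. A minimal sketch; the header path shown is an
assumption for RHEL style systems, Debian style systems typically keep headers
under /usr/src/linux-headers-$(uname -r):</p>
<pre>
$ uname -r
$ ./configure --with-linux=/usr/src/kernels/$(uname -r)
$ make && sudo make install
</pre>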
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoIMountTheFileSystem">
1.4 How do I mount the file system?</a>
</th>
</tr>
<tr>
<td>
<p>A mountable dataset will be created and automatically mounted when you
first create the pool with <i>zpool create</i>. Additional datasets can be
created with <i>zfs create</i> and they will be automatically mounted.</p>
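<p>For example, creating a pool and an additional dataset and then verifying
their mountpoints might look like the following (the pool name <em>tank</em>
and the disk <em>sdb</em> are placeholders):</p>
<pre>
$ sudo zpool create tank sdb
$ sudo zfs create tank/data
$ sudo zfs list -o name,mountpoint,mounted
</pre>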
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhyShouldIUseA64BitSystem">
1.5 Why should I use a 64-bit system?</a>
</th>
</tr>
<tr>
<td>
<p><b>You are strongly encouraged to use a 64-bit kernel. At the moment zfs
will build in a 32-bit environment but will not run stably.</b></p>
<p>In the Solaris kernel it is common practice to make heavy use of the
virtual address space because it is designed to work well. However, in
the Linux kernel most memory is addressed with a physical address and use of
the virtual address space is strongly discouraged. This is particularly true
on 32-bit arches where the virtual address space is limited to roughly 100MiB
by default. Using the virtual address space on 64-bit Linux kernels is also
discouraged. But in this case the address space is so much larger than
physical memory it is not as much of an issue.</p>
<p>If you are bumping up against the virtual memory limit you will see
the following message in your system logs. You can increase the size of the
virtual address space with the boot option <em>vmalloc=512M</em>.</p>
<table bgcolor=#eeeeee> <td> <pre>
vmap allocation for size 4198400 failed: use vmalloc=&lt;size&gt; to increase size.
</pre> </td> </table> <br>
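<p>A minimal sketch of making that change on a GRUB 2 based system; file
locations and the regeneration command vary by distribution, and GRUB legacy
systems instead take the option directly on the kernel line of grub.conf:</p>
<pre>
# /etc/default/grub: add the option to the kernel command line
GRUB_CMDLINE_LINUX="vmalloc=512M"
# then regenerate the configuration and reboot, e.g.
$ sudo update-grub                               # Debian/Ubuntu
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # RHEL/Fedora
</pre>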
<p>However, even after making this change your system will likely not be
entirely stable. Proper support for 32-bit systems is contingent upon the zfs code
being weaned off its dependence on virtual memory. This will take some time to do
correctly but it is planned for the Linux port. This change is also expected
to improve how efficiently zfs utilizes the system's memory, and it can be further
leveraged to allow tighter integration with the standard Linux VM mechanisms.</p>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhatKernelVersionsAreSupported">
1.6 What kernel versions are supported?</a>
</th>
</tr>
<tr>
<td>
<p>The current spl/zfs-0.6.2 release supports Linux <b>2.6.26 - 3.11</b> kernels.
This covers most of the kernels used in the major Linux distributions. The following
distributions are regularly tested at LLNL using a buildbot based continuous integration
development model. If you need support for a newer kernel you may find it in the latest
github sources. </p>
<center><ul>
<table width=90%>
<tr>
<th align=left>RHEL</th>
<th align=left>Ubuntu</th>
<th align=left>Fedora</th>
</tr>
<tr>
<td valign=top width=33%>
<li>RHEL 6.4 - x86_64</li>
<li>TOSS 2.1 - x86_64</li>
</td>
<td valign=top width=33%>
<li>Ubuntu 10.04 (Lucid) - x86_64</li>
<li>Ubuntu 10.10 (Maverick) - x86_64</li>
<li>Ubuntu 11.04 (Natty) - x86_64</li>
<li>Ubuntu 11.10 (Oneiric) - x86_64</li>
<li>Ubuntu 12.04 (Precise) - x86_64</li>
<li>Ubuntu 12.04 (Precise) - i386</li>
<li>Ubuntu 12.10 (Quantal) - x86_64</li>
<li>Ubuntu 13.04 (Raring) - x86_64</li>
</td>
<td valign=top width=33%>
<li>Fedora 13 (Goddard) - x86_64</li>
<li>Fedora 14 (Laughlin) - x86_64</li>
<li>Fedora 15 (Lovelock) - x86_64</li>
<li>Fedora 16 (Verne) - x86_64</li>
<li>Fedora 17 (Beefy Miracle) - x86_64</li>
<li>Fedora 18 (Spherical Cow) - x86_64</li>
<li>Fedora 19 (Schrodinger's Cat) - x86_64</li>
</td>
</tr>
<tr>
<th align=left>SLES</th>
<th align=left>Debian</th>
<th align=left>Other</th>
</tr>
<tr>
<td valign=top width=33%>
<li>openSUSE 12.3 (Dartmouth) - x86_64</li>
</td>
<td valign=top width=33%>
<li>Debian 5.0 (Lenny) - x86_64</li>
<li>Debian 6.0 (Squeeze) - x86_64</li>
</td>
<td valign=top width=33%>
<li>Proxmox 2.0 - x86_64</li>
<li>ArchLinux (Current) - x86_64</li>
<li>Gentoo (Current) - x86_64</li>
</td>
</tr>
</table>
</ul></center>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhatDevNamesShouldIUseWhenCreatingMyPool">
1.7 What /dev/ names should I use when creating my pool?</a>
</th>
</tr>
<tr>
<td>
<p>There are different /dev/ names that can be used when creating a ZFS pool.
Each option has advantages and drawbacks, the right choice for your ZFS
pool really depends on your requirements. For development and testing
using /dev/sdX naming is quick and easy. A typical home server might
prefer /dev/disk/by-id/ naming for simplicity and readability. While very
large configurations with multiple controllers, enclosures, and switches
will likely prefer /dev/disk/by-vdev naming for maximum control. But in
the end, how you choose to identify your disks is up to you.</p>
<ul>
<li>
<b>/dev/sdX, /dev/hdX:</b> Best for development/test pools
</li>
<ul>
<li>
<em>Summary:</em>
The top level /dev/ names are the default for consistency with other ZFS
implementations. They are available under all Linux distributions and are
commonly used. However, because they are not persistent they should only
be used with ZFS for development/test pools.
</li>
<li>
<em>Benefits:</em> This method is easy for a quick test, the names are short,
and they will be available on all Linux distributions.
</li>
<li>
<em>Drawbacks:</em> The names are not persistent and will change depending on
the order in which the disks are detected. Adding or removing hardware for your
system can easily cause the names to change. You would then need to remove
the zpool.cache file and re-import the pool using the new names.
</li>
<li>
<em>Example:</em>
<pre>$ sudo zpool create tank sda sdb</pre>
</li>
</ul><br>
<li>
<b>/dev/disk/by-id/:</b> Best for small pools (less than 10 disks)
</li>
<ul>
<li>
<em>Summary:</em>
This directory contains disk identifiers with more human readable names.
The disk identifier usually consists of the interface type, vendor name,
model number, device serial number, and partition number. This approach
is more user friendly because it simplifies identifying a specific disk.
</li>
<li>
<em>Benefits:</em>
Nice for small systems with a single disk controller. Because the names
are persistent and guaranteed not to change, it doesn't matter how the disks
are attached to the system. You can take them all out, randomly mix them
up on the desk, put them back anywhere in the system and your pool will still
be automatically imported correctly.
</li>
<li>
<em>Drawbacks:</em>
Configuring redundancy groups based on physical location becomes difficult
and error prone.
</li>
<li>
<em>Example:</em>
<pre>$ sudo zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP</pre>
</li>
</ul><br>
<li>
<b>/dev/disk/by-path/:</b> Good for large pools (greater than 10 disks)
</li>
<ul>
<li>
<em>Summary:</em>
This approach uses device names which include the physical cable layout
in the system, which means that a particular disk is tied to a specific
location. The name describes the PCI bus number, as well as enclosure names
and port numbers. This allows the most control when configuring a large pool.
</li>
<li>
<em>Benefits:</em>
Encoding the storage topology in the name is not only helpful for locating a
disk in large installations, but it also allows you to explicitly lay out your
redundancy groups over multiple adapters or enclosures.
</li>
<li>
<em>Drawbacks:</em>
These names are long, cumbersome, and difficult for a human to manage.
</li>
<li>
<em>Example:</em>
<pre>$ sudo zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0</pre>
</li>
</ul><br>
<li>
<b>/dev/disk/by-vdev/:</b> Best for large pools (greater than 10 disks)
</li>
<ul>
<li>
<em>Summary:</em>
This approach provides administrative control over device naming using
the configuration file /etc/zfs/vdev_id.conf. Names for disks in JBODs
can be generated automatically to reflect their physical location by
enclosure IDs and slot numbers. The names can also be manually assigned
based on existing udev device links, including those in
/dev/disk/by-path or /dev/disk/by-id. This allows you to pick your own
unique meaningful names for the disks. These names will be displayed by
all the zfs utilities so it can be used to clarify the administration of
a large complex pool. See the vdev_id and vdev_id.conf man pages for
further details.
</li>
<li>
<em>Benefits:</em>
The main benefit of this approach is that it allows you to choose
meaningful human-readable names. Beyond that, the benefits depend on the
naming method employed. If the names are derived from the physical path
the benefits of /dev/disk/by-path are realized. On the other hand,
aliasing the names based on drive identifiers or WWNs has the same
benefits as using /dev/disk/by-id.
</li>
<li>
<em>Drawbacks:</em>
This method relies on having an /etc/zfs/vdev_id.conf file properly configured
for your system. To configure this file please refer to section
<a href="#HowDoISetupVdevIdConf">1.9 How do I set up the /etc/zfs/vdev_id.conf file?</a>.
As with the benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path
may apply depending on the naming method employed.
</li>
<li>
<em>Example:</em>
<pre>$ sudo zpool create tank mirror A1 B1 mirror A2 B2</pre>
</li>
</ul>
</ul>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoIChangeNamesOnAnExistingPool">
1.8 How do I change the /dev/ names on an existing pool?</a>
</th>
</tr>
<tr>
<td>
<p>Changing the /dev/ names on an existing pool can be done by simply
exporting the pool and re-importing it with the -d option to specify
which new names should be used. For example, to use the custom names
in /dev/disk/by-vdev:</p>
<pre>
$ sudo zpool export tank
$ sudo zpool import -d /dev/disk/by-vdev tank
</pre>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoISetupVdevIdConf">
1.9 How do I set up the /etc/zfs/vdev_id.conf file?</a>
</th>
</tr>
<tr>
<td>
<p>In order to use /dev/disk/by-vdev/ naming the /etc/zfs/vdev_id.conf file must
be configured. The format of this file is described in the vdev_id.conf
man page. Several examples follow.</p>
<ul>
<li>A non-multipath configuration with direct-attached SAS enclosures
and an arbitrary slot re-mapping.
<pre>
multipath no
topology sas_direct
phys_per_port 4
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
# Linux Mapped
# Slot Slot
slot 0 2
slot 1 6
slot 2 0
slot 3 3
slot 4 5
slot 5 7
slot 6 4
slot 7 1
</pre>
<li>A SAS-switch topology. Note that the channel keyword takes only
two arguments in this example.
<pre>
topology sas_switch
# SWITCH PORT CHANNEL NAME
channel 1 A
channel 2 B
channel 3 C
channel 4 D
</pre>
<li>A multipath configuration. Note that channel names have multiple
definitions - one per physical path.
<pre>
multipath yes
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
channel 86:00.0 1 A
channel 86:00.0 0 B
</pre>
<li>A configuration using device link aliases.
<pre>
# by-vdev
# name fully qualified or base name of device link
alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca
alias d2 wwn-0x5000c5002def789e
</pre>
</ul>
<p>After defining the new disk names run <em>udevadm trigger</em> to
prompt udev to parse the configuration file. This will result in a new
/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX
names. Following the first example above, you could then create the new
pool of mirrors with the following command:</p>
<pre>
$ sudo zpool create tank \
mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
$ sudo zpool status
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
A0 ONLINE 0 0 0
B0 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
A1 ONLINE 0 0 0
B1 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
A2 ONLINE 0 0 0
B2 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
A3 ONLINE 0 0 0
B3 ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
A4 ONLINE 0 0 0
B4 ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
A5 ONLINE 0 0 0
B5 ONLINE 0 0 0
mirror-6 ONLINE 0 0 0
A6 ONLINE 0 0 0
B6 ONLINE 0 0 0
mirror-7 ONLINE 0 0 0
A7 ONLINE 0 0 0
B7 ONLINE 0 0 0
errors: No known data errors
</pre>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="PerformanceConsideration">
1.10 What’s going on with performance?</a>
</th>
</tr>
<tr>
<td>
<p>To achieve good performance with your pool there are some easy best
practices you should follow. Additionally, it should be made clear that
the ZFS on Linux implementation has not yet been optimized for performance.
As the project matures we can expect performance to improve.</p>
<ul>
<li>
<b>Evenly balance your disk across controllers:</b>
Often the limiting factor for performance is not the disk but the controller.
By balancing your disks evenly across controllers you can often improve
throughput.
</li>
<li>
<b>Create your pool using whole disks:</b>
When running <em>zpool create</em> use whole disk names. This will allow
ZFS to automatically partition the disk to ensure correct alignment. It
will also improve interoperability with other ZFS implementations which
honor the <em>wholedisk</em> property.
</li>
<li>
<b>Have enough memory:</b>
A minimum of 2GB of memory is recommended for ZFS. Additional memory is
strongly recommended when the compression and deduplication features are
enabled.
</li>
<li>
<b>Improve performance by setting ashift=12:</b>
You may be able to improve performance for some workloads by setting
ashift=12. This tuning can only be set when the pool is first created
and it will result in a decrease of capacity. For additional detail on why
you should set this option when using Advanced Format drives see section
<a href="#HowDoesZFSonLinuxHandlesAdvacedFormatDrives">
1.15 How does ZFS on Linux handle Advanced Format disks?</a>
A sketch combining these recommendations follows this list.
</li>
</ul>
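<p>Putting several of these recommendations together, creating a mirrored pool
from two whole Advanced Format disks and then checking which ashift was used
might look like this; the disk and pool names are placeholders, and
<em>zdb -C</em> prints the cached pool configuration, which includes the
ashift value:</p>
<pre>
$ sudo zpool create -o ashift=12 tank mirror sda sdb
$ sudo zdb -C tank | grep ashift
</pre>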
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="WhatDoesZpoolCacheDo">
1.11 What does the /etc/zfs/zpool.cache file do?</a>
</th>
</tr>
<tr>
<td>
<p>Whenever a pool is imported in the system it will be added to the
/etc/zfs/zpool.cache file. This file stores pool configuration information
such as the vdev device names and the active pool state. If this file
exists when the ZFS modules are loaded then any pool listed in the cache
file will be automatically imported. When a pool is not listed in the
cache file it will need to be explicitly imported.
</p>
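<p>For example, a pool whose disks were just moved over from another system
will not be in the local cache file and must be imported explicitly; pointing
<em>-d</em> at a specific /dev/ directory is optional (the pool name and path
are placeholders):</p>
<pre>
$ sudo zpool import -d /dev/disk/by-id tank
</pre>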
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoISetupShares">
1.12 How do I set up NFS or SMB shares?</a>
</th>
</tr>
<tr>
<td>
<p>ZFS has been integrated with the Linux NFS and SMB servers. You
can share a ZFS file system by setting the <em>sharenfs</em> or
<em>sharesmb</em> file system property. For example, to share the
file system tank/home via NFS and SMB with the default options.</p>
<pre>
$ sudo zfs set sharenfs=on tank/home
$ sudo zfs set sharesmb=on tank/home
</pre>
<p>Note you must still manually configure your network to allow NFS
or SMB traffic. You will also need to make sure that the NFS and SMB packages
for your distribution are installed.</p>
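<p>Once the NFS and SMB services are running you can verify that the shares
were actually published. A quick check, assuming the standard utilities are
available; the SMB shares are typically created as Samba usershares:</p>
<pre>
$ sudo exportfs -v
$ sudo net usershare list
</pre>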
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="CanIBootFromZFS">
1.13 Can I boot from ZFS?</a>
</th>
</tr>
<tr>
<td>
Yes, numerous people have had success with this. However, because it
still requires the latest versions of grub and is distribution specific
we don't recommend it. Instead we suggest using ZFS as your root file
system. There are excellent walkthroughs available for both
<a href="https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem">Ubuntu</a> and
<a href="https://github.com/pendor/gentoo-zfs-install/tree/master/install">
Gentoo</a>.
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoIAutomaticallyMountZFSFilesystemsDuringStartup">
1.14 How do I automatically mount ZFS file systems during startup?</a>
</th>
</tr>
<tr>
<td>
<ul>
<li>
<b>Ubuntu PPA:</b>
Auto mounting is provided by the enhanced mountall package from the
<a href="https://launchpad.net/~zfs-native/+archive/stable">ZFS PPA</a>.
Install the ubuntu-zfs package to get this feature.
</li>
<li>
<b>Fedora, RHEL, Arch, Gentoo, Lunar:</b>
Init scripts for these distributions have been provided; a sketch of enabling
one is shown after this list. If your
distribution of choice isn't represented please submit an init script
modeled on one of these so we can include it.
</li>
</ul>
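<p>For the init script based distributions the script usually also needs to be
enabled so it runs at boot. A sketch, assuming the script installed by your
packages is named <em>zfs</em>; the exact name and tool vary by distribution:</p>
<pre>
$ sudo chkconfig zfs on          # RHEL / Fedora (SysV init)
$ sudo rc-update add zfs boot    # Gentoo (OpenRC)
</pre>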
<p>Note that the SELinux policy for ZFS on Linux is not yet implemented.
This can lead to issues such as the init script failing to auto-mount the
filesystems when SELinux is set to enforce. The long term solution is to
add ZFS as a known filesystem type which supports xattrs to the default
SELinux policy. This is something which must be done by the upstream
Linux distribution. In the meantime, you can work around this by setting
SELinux to permissive or disabled.</p>
<pre>
$ cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
</pre>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoesZFSonLinuxHandlesAdvacedFormatDrives">
1.15 How does ZFS on Linux handle Advanced Format disks?</a>
</th>
</tr>
<tr>
<td>
<p>Advanced Format (AF) is a new disk format which natively uses a 4,096
byte sector size instead of the traditional 512 byte sector size. To maintain
compatibility with legacy systems many AF disks emulate a sector size of 512
bytes. By default, ZFS will automatically detect the sector size reported by
the drive, which for an AF disk in 512-byte emulation mode is the emulated
512 byte size. This results in poorly aligned disk access which will greatly
degrade the pool performance.</p>
<p>Therefore the ability to set the ashift property has been added to
the zpool command. This allows users to explicitly assign the sector
size at pool creation time. The ashift values range from 9 to 16 with
the default value 0 meaning auto-detect the sector size. This value is
actually a bit shift value, so an ashift value for 512 bytes is 9
(2<sup>9</sup> = 512) while the ashift value for 4,096 bytes is 12
(2<sup>12</sup> = 4,096). To force the pool to use 4,096 byte sectors
we must specify this at pool creation time:</p>
<pre>$ sudo zpool create -o ashift=12 tank mirror sda sdb</pre>
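<p>To check whether a drive reports a 4,096 byte physical sector size you can
usually inspect sysfs; an AF disk in 512-byte emulation mode would typically
report the following (sda is a placeholder device name):</p>
<pre>
$ cat /sys/block/sda/queue/physical_block_size
4096
$ cat /sys/block/sda/queue/logical_block_size
512
</pre>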
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="DoIHaveToUseECCMemory">
1.16 Do I have to use ECC memory for ZFS?</a>
</th>
</tr>
<tr>
<td>
<p>Using ECC memory for ZFS is strongly recommended for enterprise
environments where the strongest data integrity guarantees are required.
Without ECC memory rare random bit flips caused by cosmic rays or by
faulty memory can go undetected. If this were to occur ZFS (or any
other filesystem) would write the damaged data to disk and be unable to
automatically detect the corruption.</p>
<p>Unfortunately, ECC memory is not always supported by consumer grade
hardware. And even when it is supported, ECC memory will be more expensive. For
home users the additional safety brought by ECC memory might not justify
the cost. It's up to you to determine what level of protection your
data requires.</p>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="CanIUseaZVOLforSwap">
1.17 Can I use a ZVOL for swap?</a>
</th>
</tr>
<tr>
<td>
<p>Yes. Just make sure you set the ZVOL block size to match your system's
page size; for x86_64 systems that is 4 KiB. This tuning prevents ZFS from
having to perform read-modify-write operations on a larger block while the
system is already low on memory.</p>
<pre>
$ sudo zfs create tank/swap -V 2G -b 4K
$ sudo mkswap -f /dev/tank/swap
$ sudo swapon /dev/tank/swap
</pre>
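<p>To have the swap device activated automatically at boot you could also add
an /etc/fstab entry; the path below assumes the default udev links created for
ZVOLs:</p>
<pre>
/dev/zvol/tank/swap    none    swap    defaults    0 0
</pre>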
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowDoIgenerateTheZpoolCacheFile">
1.18 How do I generate the /etc/zfs/zpool.cache file?</a>
</th>
</tr>
<tr>
<td>
<p>The /etc/zfs/zpool.cache file will be automatically updated when your
pool configuration is changed. However, if for some reason it becomes
stale you can force the generation of a new /etc/zfs/zpool.cache file
by setting the <em>cachefile</em> property on the pool.</p>
<pre>
$ sudo zpool set cachefile=/etc/zfs/zpool.cache tank
</pre>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="CanIRunZFSForXenHypervisor">
1.19 Can I run ZFS on Xen Hypervisor or Xen Dom0?</a>
</th>
</tr>
<tr>
<td>
<p>Sure, but it is usually recommended to keep virtual machine storage and hypervisor pools quite separate,
although a few people have managed to successfully deploy and run ZFS on the same machine configured as Dom0.
There are a few caveats:
<ul>
<li>Set a fair amount of memory in grub.conf, dedicated to Dom0. e.g. <pre>dom0_mem=16384M,max:16384M</pre></li>
<li>Allocate no more than <em>30-40%</em> of Dom0's memory to ZFS in /etc/modprobe.d/zfs.conf, e.g. <pre>options zfs zfs_arc_max=6442450944</pre> (6442450944 bytes is 6 GiB, roughly 37% of the 16 GiB Dom0 in the example above).</li>
<li>Disable Xen's auto-ballooning in <pre>/etc/xen/xl.conf</pre></li>
<li>Watch out for any Xen bugs, such as <a href="http://lists.xen.org/archives/html/xen-devel/2012-12/msg01668.html" target="_blank">
this one</a> related to ballooning</li>
</ul>
For details, please see <a href="https://github.com/zfsonlinux/zfs/issues/1067" target="_blank">issue #1067</a> on GitHub.
</p>
</td>
</tr>
<tr bgcolor="#aaaaaa">
<th>
<a name="HowCanIHelp">
2.1 How can I help?</a>
</th>
</tr>
<tr>
<td>
<p>The most helpful thing you can do is to try ZFS on your Linux
system and report any
<a href="http://github.com/zfsonlinux/zfs/issues">issues</a>. If you
like what you see and would like to contribute to the project please send
<a href="http://github.com/behlendorf">me</a> an email. There are quite a few
open issues on the <a href="http://github.com/zfsonlinux/zfs/issues">issue
tracker</a> which need attention, or if you have an idea of your own that is
fine too.</p>
</td>
</tr>
</table>
</center>
</body>
</html>