Discussion:
vdev_is_dead panic, NULL vd pointer
Bob via illumos-zfs
2014-06-26 09:06:38 UTC
Hi All,

We got a system crash (core dump) with the following call stack, and it
seems to be caused by a particular DISK: if we plug in the disk and
trigger a 'zpool import' with no additional arguments, the system
crashes immediately; nothing is displayed on the console, but the dump
info below is available after reboot.
What happened in this case? Is it a bug, or was the on-disk data
corrupted by mis-operation?

Thanks in advance.

> ::stackregs
ffffff001f4ed720 vdev_is_dead+0xc(0)
ffffff001f4ed740 vdev_readable+0x16(0)
ffffff001f4ed780 vdev_mirror_child_select+0x61(ffffff04ffb69ea0)
ffffff001f4ed7c0 vdev_mirror_io_start+0xda(ffffff04ffb69ea0)
ffffff001f4ed800 zio_vdev_io_start+0x20a(ffffff04ffb69ea0)
ffffff001f4ed830 zio_execute+0x8d(ffffff04ffb69ea0)
ffffff001f4ed870 zio_wait+0x32(ffffff04ffb69ea0)
ffffff001f4ed910 arc_read+0x893(0, ffffff05013da040, ffffff04f03e4400, fffffffff79b6798, ffffff05018acc50, 0, 40, ffffff001f4ed94c, ffffff001f4ed950)
ffffff001f4ed9c0 dmu_objset_open_impl+0xe7(ffffff05013da040, 0, ffffff04f03e4400, ffffff04f03e43c8)
ffffff001f4eda10 dsl_pool_init+0x40(ffffff05013da040, 8f1, ffffff05013da328)
ffffff001f4edb00 spa_load_impl+0x585(ffffff05013da040, 813eb04b57209d67, ffffff0501638eb8, 3, 0, 1, ffffff001f4edb20)
ffffff001f4edb90 spa_load+0x15c(ffffff05013da040, 3, 0, 1)
ffffff001f4edbe0 spa_tryimport+0x97(ffffff05013c0268)
ffffff001f4edc20 zfs_ioc_pool_tryimport+0x45(ffffff04ffb61000)
ffffff001f4edcc0 zfsdev_ioctl+0x327(b500000000, 5a06, 80424a0, 100003, ffffff04fa67f0d8, ffffff001f4edde4)
ffffff001f4edd00 cdev_ioctl+0x45(b500000000, 5a06, 80424a0, 100003, ffffff04fa67f0d8, ffffff001f4edde4)
ffffff001f4edd40 spec_ioctl+0x5a(ffffff04ea72e100, 5a06, 80424a0, 100003, ffffff04fa67f0d8, ffffff001f4edde4, 0)
ffffff001f4eddc0 fop_ioctl+0x7b(ffffff04ea72e100, 5a06, 80424a0, 100003, ffffff04fa67f0d8, ffffff001f4edde4, 0)
ffffff001f4edec0 ioctl+0x18e(3, 5a06, 80424a0)
ffffff001f4edf10 _sys_sysenter_post_swapgs+0x149()

> vdev_is_dead::dis
vdev_is_dead:        pushq  %rbp
vdev_is_dead+1:      movq   %rsp,%rbp
vdev_is_dead+4:      subq   $0x8,%rsp
vdev_is_dead+8:      movq   %rdi,-0x8(%rbp)
vdev_is_dead+0xc:    cmpq   $0x6,0x40(%rdi)
vdev_is_dead+0x11:   jb     +0x1d    <vdev_is_dead+0x30>
vdev_is_dead+0x13:   cmpq   $0x0,0x418(%rdi)
vdev_is_dead+0x1b:   jne    +0x13    <vdev_is_dead+0x30>
vdev_is_dead+0x1d:   leaq   +0x43099ec(%rip),%r8    <vdev_missing_ops>
vdev_is_dead+0x24:   cmpq   0x50(%rdi),%r8
vdev_is_dead+0x28:   sete   %al
vdev_is_dead+0x2b:   movzbl %al,%eax
vdev_is_dead+0x2e:   jmp    +0x5    <vdev_is_dead+0x35>
vdev_is_dead+0x30:   movl   $0x1,%eax
vdev_is_dead+0x35:   leave
vdev_is_dead+0x36:   ret
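
The faulting PC, vdev_is_dead+0xc, is the very first dereference of the
function's argument, so the stack shows vdev_readable()/vdev_is_dead()
being handed a NULL vdev pointer. For reference, a minimal C sketch of
what the disassembly implements, paraphrasing illumos's vdev.c; the
field annotations are inferred from the offsets above, not taken from
the exact kernel struct layout:

/*
 * Minimal sketch of vdev_is_dead()/vdev_readable(), paraphrasing
 * illumos vdev.c. Field names are annotations inferred against the
 * disassembly offsets, not a copy of the kernel's struct layout.
 */
#include <stdint.h>

#define	VDEV_STATE_DEGRADED	6	/* the 0x6 compared at vdev_is_dead+0xc */

typedef struct vdev_ops vdev_ops_t;
extern vdev_ops_t vdev_missing_ops;

typedef struct vdev {
	/* ... */
	uint64_t	vdev_state;	/* the load at 0x40(%rdi): first use of vd */
	vdev_ops_t	*vdev_ops;	/* compared at 0x50(%rdi) */
	uint64_t	vdev_ishole;	/* the test at 0x418(%rdi) */
	uint64_t	vdev_cant_read;
	/* ... */
} vdev_t;

int
vdev_is_dead(vdev_t *vd)
{
	/*
	 * The first thing this function does with vd is load
	 * vd->vdev_state, which is exactly the faulting cmpq at
	 * vdev_is_dead+0xc. With vd == NULL (the "0" argument in
	 * ::stackregs) that load page-faults.
	 */
	return (vd->vdev_state < VDEV_STATE_DEGRADED || vd->vdev_ishole ||
	    vd->vdev_ops == &vdev_missing_ops);
}

int
vdev_readable(vdev_t *vd)
{
	/* vd is passed through unchecked, so vdev_readable(NULL) faults too. */
	return (!vdev_is_dead(vd) && !vd->vdev_cant_read);
}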

zdb info:

zdb -l /dev/rdsk/c2t5000C50055DC8187d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 166
    pool_guid: 9313074917079555431
    hostid: 12733532
    hostname: 'Anystorage83'
    top_guid: 3241563026677937076
    guid: 3241563026677937076
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3241563026677937076
        path: '/dev/dsk/c2t5000C50055DC8187d0s0'
        devid: 'id1,***@n5000c50055dc8187/a'
        phys_path: '/scsi_vhci/***@g5000c50055dc8187:a'
        whole_disk: 1
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 9
        asize: 2000385474560
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 166
    pool_guid: 9313074917079555431
    hostid: 12733532
    hostname: 'Anystorage83'
    top_guid: 3241563026677937076
    guid: 3241563026677937076
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3241563026677937076
        path: '/dev/dsk/c2t5000C50055DC8187d0s0'
        devid: 'id1,***@n5000c50055dc8187/a'
        phys_path: '/scsi_vhci/***@g5000c50055dc8187:a'
        whole_disk: 1
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 9
        asize: 2000385474560
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 166
    pool_guid: 9313074917079555431
    hostid: 12733532
    hostname: 'Anystorage83'
    top_guid: 3241563026677937076
    guid: 3241563026677937076
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3241563026677937076
        path: '/dev/dsk/c2t5000C50055DC8187d0s0'
        devid: 'id1,***@n5000c50055dc8187/a'
        phys_path: '/scsi_vhci/***@g5000c50055dc8187:a'
        whole_disk: 1
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 9
        asize: 2000385474560
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 166
    pool_guid: 9313074917079555431
    hostid: 12733532
    hostname: 'Anystorage83'
    top_guid: 3241563026677937076
    guid: 3241563026677937076
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 3241563026677937076
        path: '/dev/dsk/c2t5000C50055DC8187d0s0'
        devid: 'id1,***@n5000c50055dc8187/a'
        phys_path: '/scsi_vhci/***@g5000c50055dc8187:a'
        whole_disk: 1
        metaslab_array: 33
        metaslab_shift: 34
        ashift: 9
        asize: 2000385474560
        is_log: 0
        create_txg: 4
    features_for_read:

zdb -e 9313074917079555431

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 9313074917079555431
        name: 'tank'
        txg: 166
        state: 0
        hostid: 12733532
        hostname: 'Anystorage83'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 9313074917079555431
            children[0]:
                type: 'disk'
                id: 0
                guid: 3241563026677937076
                phys_path: '/scsi_vhci/***@g5000c50055dc8187:a'
                whole_disk: 1
                metaslab_array: 33
                metaslab_shift: 34
                ashift: 9
                asize: 2000385474560
                is_log: 0
                create_txg: 4
                path: '/dev/dsk/c2t5000C50055DC8187d0s0'
                devid: 'id1,***@n5000c50055dc8187/a'
Segmentation Fault (core dumped)






Matthew Ahrens via illumos-zfs
2014-06-30 17:00:11 UTC
It may be that there is something wrong with the data on disk. Can you
make a crash dump available? (contact me off-list if you'd prefer)

--matt
Post by Bob via illumos-zfs
Hi All,
We got a system crash (core dump) with the following call stack, and it
seems to be caused by a particular DISK: if we plug in the disk and
trigger a 'zpool import' with no additional arguments, the system
crashes immediately.
[...]
Gordon Ross via illumos-zfs
2014-06-30 17:24:03 UTC
I think I've seen this one. Do you see a ZFS vnode with its mode bits
having zero for the inode type bits?



On Mon, Jun 30, 2014 at 1:00 PM, Matthew Ahrens via illumos-zfs <
Post by Matthew Ahrens via illumos-zfs
It may be that there is something wrong with the data on disk. Can you
make a crash dump available? (contact me off-list if you'd prefer)
--matt
On Thu, Jun 26, 2014 at 2:06 AM, Bob via illumos-zfs <
Post by Bob via illumos-zfs
Hi All,
We got a system crash (core dump) with the following call stack, and it
seems to be caused by a particular DISK: if we plug in the disk and
trigger a 'zpool import' with no additional arguments, the system
crashes immediately.
[...]
--
Gordon Ross <***@nexenta.com>
Nexenta Systems, Inc. www.nexenta.com
Enterprise class storage for everyone



Matthew Ahrens via illumos-zfs
2014-06-30 17:27:51 UTC
Given that he is in the middle of spa_load(), I don't imagine any ZFS
vnodes/inodes have been instantiated yet.

--matt
Post by Gordon Ross via illumos-zfs
I think I've seen this one. Do you see a ZFS vnode with its mode bits
having zero for the inode type bits?
[...]
Gordon Ross via illumos-zfs
2014-06-30 17:45:17 UTC
Oh, yeah, I was thinking of another similarly named function. Never mind :)
Post by Matthew Ahrens via illumos-zfs
Given that he is in the middle of spa_load(), I don't imagine any ZFS
vnodes/inodes have been instantiated yet.
--matt
Post by Gordon Ross via illumos-zfs
I think I've seen this one. Do you see a ZFS vnode with its mode bits
having zero for the inode type bits?
[...]
--
Gordon Ross <***@nexenta.com>
Nexenta Systems, Inc. www.nexenta.com
Enterprise class storage for everyone



Matthew Ahrens via illumos-zfs
2014-07-01 17:36:32 UTC
The problem is that there are at least 3 devices in the pool, but the pool
config that was passed in from userland has only one device.
> ffffff04ffb69ea0::print zio_t io_bp | ::blkptr
DVA[0]=<0:40e00:800>
DVA[1]=<1:7b800:800>
DVA[2]=<2:4c000:800>
[L0 OBJSET] FLETCHER_4 OFF LE contiguous unique triple
size=800L/800P birth=2288L/2288P fill=81
cksum=aa3106428:1265ceba38fd:10005d6be71157:95492e9d30e35f6

Note that its 3 DVAs are on vdev IDs 0, 1, and 2.
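
(Aside: each DVA embeds the ID of the top-level vdev it points at in the
first component of its <vdev:offset:asize> form; a rough sketch of the
extraction, modeled on DVA_GET_VDEV in illumos's sys/spa.h:)

/* Rough sketch of DVA_GET_VDEV from illumos sys/spa.h: bits 32..63 of
 * dva_word[0] hold the top-level vdev ID the DVA points at. */
#include <stdint.h>

typedef struct dva {
	uint64_t	dva_word[2];
} dva_t;

static inline uint64_t
dva_get_vdev(const dva_t *dva)
{
	return ((dva->dva_word[0] >> 32) & 0xffffffffULL);
}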
> ffffff0501638eb8::nvlist
    vdev_children=0000000000000001
    version=0000000000001388
    pool_guid=813eb04b57209d67
    name='tank'
    state=0000000000000000
    hostid=0000000000c24c5c
    hostname='Anystorage83'
    vdev_tree
        type='root'
        id=0000000000000000
        guid=813eb04b57209d67
        children[0]
            type='disk'
            id=0000000000000000
            guid=2cfc5865f22db3b4
            phys_path='/scsi_vhci/***@g5000c50055dc8187:a'
            whole_disk=0000000000000001
            metaslab_array=0000000000000021
            metaslab_shift=0000000000000022
            ashift=0000000000000009
            asize=000001d1c0440000
            is_log=0000000000000000
            create_txg=0000000000000004
            path='/dev/dsk/c2t5000C50055DC8187d0s0'
            devid='id1,***@n5000c50055dc8187/a'

Does that match your understanding of the pool layout, i.e. that it should
have at least 3 top-level vdevs? Are they all still present on the system?
I'm not sure exactly how userland generates this config. Could you also
send the output of "zdb -l /dev/dsk/..." for each of the devices that
should be part of the pool?

--matt
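
That mismatch would also explain the NULL vd mechanically: the objset
block's DVAs name top-level vdev IDs 1 and 2, but the tryimport config
built a root vdev with a single child, so the per-DVA vdev lookup
returns NULL and flows unchecked into vdev_readable(). A condensed,
self-contained sketch of that path, paraphrasing illumos's vdev.c and
vdev_mirror.c (simplified, not the verbatim kernel source):

/*
 * Condensed sketch of the crash path, paraphrasing illumos vdev.c and
 * vdev_mirror.c (simplified; not the verbatim kernel source).
 */
#include <stddef.h>
#include <stdint.h>

typedef struct vdev vdev_t;
struct vdev {
	uint64_t	vdev_children;	/* == 1 in the tryimport config above */
	vdev_t		**vdev_child;
};

typedef struct spa {
	vdev_t	*spa_root_vdev;
} spa_t;

/* vdev_lookup_top(): an out-of-range top-level vdev ID yields NULL. */
static vdev_t *
vdev_lookup_top(spa_t *spa, uint64_t vdev)
{
	vdev_t *rvd = spa->spa_root_vdev;

	if (vdev < rvd->vdev_children)
		return (rvd->vdev_child[vdev]);
	return (NULL);			/* DVA vdev IDs 1 and 2 end up here */
}

/*
 * vdev_mirror_map_alloc() treats a multi-DVA block pointer as an
 * implicit mirror, one child per DVA, roughly:
 *
 *	mc->mc_vd = vdev_lookup_top(spa, DVA_GET_VDEV(&dva[c]));
 *
 * and vdev_mirror_child_select() then calls vdev_readable(mc->mc_vd)
 * with no NULL check, which is how a 1-child config plus a 3-DVA block
 * pointer becomes vdev_is_dead(NULL) and the fault at +0xc in the
 * ::stackregs output above.
 */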
Post by Bob via illumos-zfs
Hi All,
We got a system crash (core dump) with the following call stack, and it
seems to be caused by a particular DISK: if we plug in the disk and
trigger a 'zpool import' with no additional arguments, the system
crashes immediately.
[...]