Channel: VMware Communities : Unanswered Discussions - ESXi

Storage related query


Hi all,

 

I have two different storage platforms, a Dell 3PAR and a Nimble HF40. Is it possible to connect both of them to an ESXi 6.7 host through iSCSI?
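For what it's worth, a minimal sketch of how both arrays would normally be added as dynamic-discovery targets on the same software iSCSI adapter (the adapter name and portal IPs below are placeholders, not taken from this post):

# Identify the software iSCSI adapter (often vmhba64/vmhba65)
esxcli iscsi adapter list

# Add each array's discovery portal as a send target (IPs are examples)
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.10.10:3260   # 3PAR portal
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.20.10:3260   # Nimble portal

# Rescan so LUNs from both arrays show up
esxcli storage core adapter rescan -A vmhba64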


USB / external drive can't be seen in ESXi server terminal


Hello,

 

I'm trying to add a USB/external drive as a datastore. Based on my research, after turning off the usbarbitrator service the USB device should appear in the list below,
but in my case I can't see either of them. I also attached usb.log. I'm a newbie with ESXi, so I'm not really sure where to look.
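For reference, a minimal sketch of the usbarbitrator-disable procedure this is based on (assuming a standard 6.7 install; the grep pattern is only illustrative):

# Stop the USB arbitrator so the host (rather than VMs) can claim USB storage
/etc/init.d/usbarbitrator stop

# Keep it disabled across reboots
chkconfig usbarbitrator off

# The USB disk should then show up as a device/path
ls /dev/disks/
esxcli storage core device list | grep -i usb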

 

Client Version: 1.25.0

ESXi Version: 6.7.0

image1.PNG

image2.PNG

 

usb.log

 

2020-02-17T15:20:01Z usbarb[2101226]: USBArb: new socket connection estibalished: socket 10

2020-02-17T15:20:01Z usbarb[2101226]: USBArb: new client F973D967D0 created, socket 10 added to poll queue

2020-02-17T15:20:01Z usbarb[2101226]: USBArb: Client 2099078 connected (version: 7)

2020-02-17T15:20:09Z usbarb[2101226]: USBArb: new socket connection estibalished: socket 11

2020-02-17T15:20:09Z usbarb[2101226]: USBArb: new client F973D96AF0 created, socket 11 added to poll queue

2020-02-17T15:20:09Z usbarb[2101226]: USBArb: Client 2099827 connected (version: 7)

2020-02-17T15:20:13Z usbarb[2101226]: USBArb: new socket connection estibalished: socket 12

2020-02-17T15:20:13Z usbarb[2101226]: USBArb: new client F973D96BF0 created, socket 12 added to poll queue

2020-02-17T15:20:13Z usbarb[2101226]: USBArb: Client 2099828 connected (version: 7)

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: pipe 11 closed by client F9B4F989A0

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: removing client F973D96AF0: pid=2099827, pipe=11

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: Client 2099827 disconnected

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: pipe 12 closed by client F9B4F989A0

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: removing client F973D96BF0: pid=2099828, pipe=12

2020-02-17T15:22:06Z usbarb[2101226]: USBArb: Client 2099828 disconnected

2020-02-17T15:22:10Z usbarb[2101226]: USBArb: pipe 10 closed by client F9B4F989A0

2020-02-17T15:22:10Z usbarb[2101226]: USBArb: removing client F973D967D0: pid=2099078, pipe=10

2020-02-17T15:22:10Z usbarb[2101226]: USBArb: Client 2099078 disconnected

2020-02-17T15:26:05Z mark: storage-path-claim-completed

2020-02-17T15:26:08Z usbarb[2098893]: VTHREAD 484834657856 "usbArb" wid 2098893

2020-02-17T15:26:08Z usbarb[2098893]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.

2020-02-17T15:26:08Z usbarb[2098893]: DICT --- GLOBAL SETTINGS /usr/lib/vmware/settings

2020-02-17T15:26:08Z usbarb[2098893]: DICT --- NON PERSISTENT (null)

2020-02-17T15:26:08Z usbarb[2098893]: DICT --- HOST DEFAULTS /etc/vmware/config

2020-02-17T15:26:08Z usbarb[2098893]: DICT                    libdir = "/usr/lib/vmware"

2020-02-17T15:26:08Z usbarb[2098893]: DICT           authd.proxy.nfc = "vmware-hostd:ha-nfc"

2020-02-17T15:26:08Z usbarb[2098893]: DICT        authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"

2020-02-17T15:26:08Z usbarb[2098893]: DICT   authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"

2020-02-17T15:26:08Z usbarb[2098893]: DICT      authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"

2020-02-17T15:26:08Z usbarb[2098893]: DICT            authd.fullpath = "/sbin/authd"

2020-02-17T15:26:08Z usbarb[2098893]: DICT --- SITE DEFAULTS /usr/lib/vmware/config

2020-02-17T15:26:08Z usbarb[2098893]: VMware USB Arbitration Service Version 17.2.1

2020-02-17T15:26:08Z usbarb[2098893]: USBGL: opened '/vmfs/devices/char/vmkdriver/usbdevices' (fd=8) for usb device enumeration

2020-02-17T15:26:08Z usbarb[2098893]: USBArb: Attempting to connect to existing arbitrator on /var/run/vmware/usbarbitrator-socket.

2020-02-17T15:26:08Z usbarb[2098893]: SOCKET creating new socket, connecting to /var/run/vmware/usbarbitrator-socket

2020-02-17T15:26:08Z usbarb[2098893]: SOCKET connect failed, error 2: No such file or directory

2020-02-17T15:26:08Z usbarb[2098893]: USBArb: Failed to connect to the existing arbitrator.

2020-02-17T15:26:08Z usbarb[2098893]: USBArb: listening socket 9 created successfully

2020-02-17T15:26:08Z usbarb[2098893]: USBArb: adding listening socket 9 to poll queue

2020-02-17T15:26:08Z usbarb[2098893]: USBGL: usb device change detected: start enumeration.

2020-02-17T15:26:24Z usbarb[2098893]: USBArb: new socket connection estibalished: socket 10

2020-02-17T15:26:24Z usbarb[2098893]: USBArb: new client 709FFFD7D0 created, socket 10 added to poll queue

2020-02-17T15:26:24Z usbarb[2098893]: USBArb: Client 2099147 connected (version: 7)

2020-02-17T15:26:31Z usbarb[2098893]: USBArb: new socket connection estibalished: socket 11

2020-02-17T15:26:31Z usbarb[2098893]: USBArb: new client 709FFFDAF0 created, socket 11 added to poll queue

2020-02-17T15:26:32Z usbarb[2098893]: USBArb: Client 2099907 connected (version: 7)

2020-02-17T15:26:38Z usbarb[2098893]: USBArb: new socket connection estibalished: socket 12

2020-02-17T15:26:38Z usbarb[2098893]: USBArb: new client 709FFFDBF0 created, socket 12 added to poll queue

2020-02-17T15:26:38Z usbarb[2098893]: USBArb: Client 2099906 connected (version: 7)

2020-02-17T15:40:21Z mark: storage-path-claim-completed

2020-02-17T17:06:02Z mark: storage-path-claim-completed

2020-02-17T17:13:52Z usbarb[2101301]: VTHREAD 1066572502592 "usbArb" wid 2101301

2020-02-17T17:13:52Z usbarb[2101301]: DictionaryLoad: Cannot open file "/usr/lib/vmware/config": No such file or directory.

2020-02-17T17:13:52Z usbarb[2101301]: DICT --- GLOBAL SETTINGS /usr/lib/vmware/settings

2020-02-17T17:13:52Z usbarb[2101301]: DICT --- NON PERSISTENT (null)

2020-02-17T17:13:52Z usbarb[2101301]: DICT --- HOST DEFAULTS /etc/vmware/config

2020-02-17T17:13:52Z usbarb[2101301]: DICT                    libdir = "/usr/lib/vmware"

2020-02-17T17:13:52Z usbarb[2101301]: DICT           authd.proxy.nfc = "vmware-hostd:ha-nfc"

2020-02-17T17:13:52Z usbarb[2101301]: DICT        authd.proxy.nfcssl = "vmware-hostd:ha-nfcssl"

2020-02-17T17:13:52Z usbarb[2101301]: DICT   authd.proxy.vpxa-nfcssl = "vmware-vpxa:vpxa-nfcssl"

2020-02-17T17:13:52Z usbarb[2101301]: DICT      authd.proxy.vpxa-nfc = "vmware-vpxa:vpxa-nfc"

2020-02-17T17:13:52Z usbarb[2101301]: DICT            authd.fullpath = "/sbin/authd"

2020-02-17T17:13:52Z usbarb[2101301]: DICT --- SITE DEFAULTS /usr/lib/vmware/config

2020-02-17T17:13:52Z usbarb[2101301]: VMware USB Arbitration Service Version 17.2.1

2020-02-17T17:13:52Z usbarb[2101301]: USBGL: opened '/vmfs/devices/char/vmkdriver/usbdevices' (fd=8) for usb device enumeration

2020-02-17T17:13:52Z usbarb[2101301]: USBArb: Attempting to connect to existing arbitrator on /var/run/vmware/usbarbitrator-socket.

2020-02-17T17:13:52Z usbarb[2101301]: SOCKET creating new socket, connecting to /var/run/vmware/usbarbitrator-socket

2020-02-17T17:13:52Z usbarb[2101301]: SOCKET connect failed, error 2: No such file or directory

2020-02-17T17:13:52Z usbarb[2101301]: USBArb: Failed to connect to the existing arbitrator.

2020-02-17T17:13:52Z usbarb[2101301]: USBArb: listening socket 9 created successfully

2020-02-17T17:13:52Z usbarb[2101301]: USBArb: adding listening socket 9 to poll queue

2020-02-17T17:13:52Z usbarb[2101301]: USBGL: usb device change detected: start enumeration.

 

Thanks in advance for the help.

ixgben: indrv_GetPcieErrorInfo:508: Number of register offsets is zero


Hi,

I couldn't find anything about the problem below on the HPE or VMware sites:

 

ixgben: indrv_GetPcieErrorInfo:508: Number of register offsets is zero

 

Some NICs are disconnected from the network (no cable attached), but I'm getting alerts about those NICs going up and down.

Has anyone had the same experience on an HPE DL380 Gen9 with vSphere 6.7 U3, and how did you resolve it?

The issue was also happening on vSphere 6.0 U3.
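A hedged sketch of how the unused, uncabled ports could be checked and administratively shut down so they stop generating link up/down alerts (the vmnic numbers are placeholders):

# Show link state and driver for each uplink (ixgben ports included)
esxcli network nic list

# Administratively bring down the unused ports so they no longer flap
esxcli network nic down -n vmnic2
esxcli network nic down -n vmnic3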

Multipath Black List


Hi,

Is there any way to mark a device as a non-multipath device in ESXi?

There are many log entries about nmp_ResetDeviceLogThrottling for local disks and the CD-ROM.
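There is no per-device "non-multipath" flag as such, but a commonly documented approach is a MASK_PATH claim rule so NMP stops claiming the local disk/CD-ROM paths; a minimal sketch, with the rule number, adapter, and C:T:L location as placeholders to be taken from your own esxcfg-mpath -L output:

# Current claim rules
esxcli storage core claimrule list

# Example: mask the path to one local device (values are placeholders)
esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 0 -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0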

Motherboard/CPU upgrade - reinstallation needed?


As per the title, I'm upgrading my old Tyan motherboards to more recent boards from a different brand, Supermicro.

In both cases the CPUs are Opterons.

 

What would be the best (quickest) way to move the standalone ESXi 6.5 installation from the old motherboard to the new one?

The controller and RAID will stay the same, untouched.

 

I guess reinstalling is a must; I doubt I can just move the disk and have it work as-is. If so, can I just perform something like an upgrade, without really touching the config?

Or do I really have to go through the pain of reinstalling from scratch?
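Whichever route is taken, a minimal sketch of backing up the host configuration first (restoring a config bundle onto different hardware can remap NICs, so treat this as best effort rather than a guaranteed migration path):

# Enter maintenance mode and create a configuration backup bundle
vim-cmd hostsvc/maintenance_mode_enter
vim-cmd hostsvc/firmware/backup_config   # prints a URL to download configBundle.tgz

# After reinstalling the same ESXi build on the new board, the bundle can be restored:
# vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz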

 

Thanks!

Help with VMware 5.1 crashes on startup


Our server couldn't last one more week until the new one arrives for the migration/upgrade. We're getting the error message below. I tried calling VMware, but apparently they don't want to take my money and help. I'd greatly appreciate any advice anyone can offer!

Error .png

No Storage found


Hi,

I have set up ESXi 6.7 with two datastores and installed a client machine.

Then I attached an RDX SATA device. Because it was not reported under Manage -> Hardware, I probably made the following mistake.

I assumed that the device "Hewlett-Packard Company Smart Array P410i" was the newly attached RDX, but it was actually the RAID controller, and I switched it to passthrough.

After a restart, both datastores disappeared. Simply using "Toggle passthrough" to set it back to "Not capable" is not working, because after the reboot it is still Active!

In a shell, ls /dev/disks/ lists no disks.

 

How can I revert this setting? Please help!
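A hedged sketch of where the stuck passthrough setting is usually persisted (take a backup first; the PCI address shown by the grep will be whatever your P410i uses):

# Passthrough assignments persist in esx.conf; back it up before touching it
cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak

# Find the P410i's PCI device entry whose owner is set to "passthru"
grep -n "passthru" /etc/vmware/esx.conf

# Editing that owner value back to "vmkernel" with vi and rebooting is the commonly
# reported way to return the controller (and its datastores) to the host.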

ESXi connection drops while transferring files


Hello,

I am having an issue with ESXi 6.5 U3 dropping off the network during a file transfer. The board I am using has 4 integrated ports controlled by two Intel 82574L controllers. I have no VMs set up yet, as I am still trying to upload the ISO files.

I created a datastore on the RAID 1 volume. I can transfer files up to about 5 MB just fine, but if I try to go above that, ESXi drops off the network. I created another datastore on one of the 3 TB drives and can transfer a 50 MB file, but anything larger than that drops ESXi off the network. Usually ESXi is still responsive through IPMI, but sometimes it freezes as well. A restart fixes the problem until I try to upload a "large" file again.

This happens with both the web client and SFTP.

 

I have tried several of the network ports on the motherboard with the same result.

 

Hardware:

     Version: ESXi 6.5 U3 on a SanDisk USB stick
     Motherboard: Supermicro X8SIE-LN4F
     Processor: Xeon X3460
     LSI 9260-4i with two 2.5" 500 GB 5200 RPM drives in RAID 1
     2 x Seagate 3 TB drives on the motherboard SATA controller

 

I found a couple of other threads, but they were either hitting a 4 GB upload limit in IE, or upgrading to version 4.4 fixed their issue.
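A minimal sketch of log checks that could narrow this down (the driver name depends on how the 82574L ports were claimed on this build):

# Uplink status and which driver (e.g. e1000e/ne1000) claimed the ports
esxcli network nic list

# Look for NIC resets, watchdog events, or link flaps around the time of a drop
grep -iE "vmnic|e1000|watchdog|link" /var/log/vmkernel.log | tail -n 50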

 

Thoughts?

 

Thanks,

Bob


MacPro Rack 2019 & ESXi 6.7 U3 (fully patched) & DarwinPanic: panic(cpu 1 caller 0xffffff7f83b1db8d): "DSMOS: SMC read error K0: 133"@/BuildRoot/Library/Caches/com.apple.xbs/Sources/DontStealMacOS/DontStealMacOS-30.0.1/Dont_Steal_MacOS.cpp:191


Hi,

 

We have a brand new Mac Pro 2019 here running vSphere 6.7 U3 (latest) and are trying to get our macOS VMs running on it.

We currently have 2012 Mac Pros with ESXi 6.5 (latest) hosting our VMs and would like to migrate to newer ESXi hosts.

 

This is a fine machine and works very well (with two Intel quad-port 10 Gbit NICs and a Samsung 1.6 TB NVMe PCIe card).

It's superb to have so many PCIe slots and so much memory behind a single 16-core Intel CPU.

 

The Linux guests are okay, but I get the panic message above when running Apple macOS 10.15 (and even 10.14) on the Apple Mac Pro 2019.

 

Do you have any (beta) update that brings the right SMC call emulation to this machine?

 

Thanks a lot in advance

 

Henri

Hard disk degraded after installing ESXi on Cisco UCS C220 M3 server


The hard disk shows as degraded after installing ESXi on a Cisco UCS C220 M3 server.

disk expand


I have one standalone ESXi 5.1 host with VMs running on it. The server owner says there is no disk space left on a Linux server.

I checked the Linux server, but I couldn't figure out how its six disks map to the disks attached to the VM, or which one is full. How can I find out which disk I should expand?

 

Below is the RVTools report for the Linux VM; I also attached a picture of the virtual machine settings. How do I find out which disk I should expand?

 

Thanks for helping.

 

        

VM         Powerstate   Template   Disk       Capacity MB   Consumed MB   Free MB   Free %
abc-db-07  poweredOn    False      /tmp       8.336.922     7.585.539     751.383   9
abc-db-07  poweredOn    False      /boot      476           116           360       75
abc-db-07  poweredOn    False      /var/tmp   8.336.922     7.585.539     751.383   9
abc-db-07  poweredOn    False      /          8.336.922     7.585.539     751.383   9

VM         Powerstate   Template   Disk          Capacity MB
abc-db-07  poweredOn    False      Hard disk 3   2.097.151
abc-db-07  poweredOn    False      Hard disk 6   2.097.151
abc-db-07  poweredOn    False      Hard disk 2   307.200
abc-db-07  poweredOn    False      Hard disk 4   1.572.864
abc-db-07  poweredOn    False      Hard disk 5   2.097.151
abc-db-07  poweredOn    False      Hard disk 1   307.200
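A minimal sketch of how the guest's full filesystem can be matched to the right "Hard disk N" entry (run inside the Linux guest; column names per util-linux lsblk):

# Size, mount point, and SCSI address (host:channel:target:lun) of each block device
lsblk -o NAME,SIZE,MOUNTPOINT,HCTL

# The HCTL target number corresponds to the virtual SCSI unit shown in the VM's
# Edit Settings (e.g. SCSI(0:1) -> "Hard disk 2" in a typical layout), so the full
# filesystem above can be traced back to the virtual disk that needs to grow.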

ESXi SSD Host Cache down to 5% available


Hi all,

 

While I do know my way around VMware, I'm not particularly up to speed on SAN/cache topics.

 

So we have this host. It has a couple of datastores for the whopping 2 VMs on there (it's old and on the way out).

 

The SSD devoted to Host Cache is 256 GB.

 

However, we recently started receiving alerts that there is only 5% remaining out of the 256 GB.

 

What is strange to me is that the very first file in there is sysSwap-hc-50f94172-620a-3548-9909-001e676b8295.swp, which weighs in at 1.48 GB. Nothing to write home to mom about, except note the modification date: 1/18/2020 at 12:07:18. More on that later.

 

Now, at the root level where the file listed above is located, there is also a folder called 50f94172-620a-3548-9909-001e676b8295, but this one has no modification date.

 

When you drill down into that folder, there is only one folder under it, labeled hostCache, and again it has no modification date.

 

Drill into hostCache and that's when the world of fun starts. Be ready, lots to copy and paste, but as a TL;DR, please note the modification dates.

 

Where would I even begin to figure out what these files are for, why they are there, how to clean them up, etc.?

 

(All entries below are of type "File" in [SSD Cache] 50f94172-620a-3548-9909-001e676b8295/hostCache.)

Name           Size              Date Modified
lls-109.vswp   1,048,576.00 KB   1/21/2020 4:13:18 AM
lls-95.vswp    1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-96.vswp    1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-97.vswp    1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-98.vswp    1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-99.vswp    1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-100.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-101.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-102.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-103.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-104.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-105.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-106.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-107.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-108.vswp   1,048,576.00 KB   1/18/2020 12:07:18 AM
lls-60.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-61.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-62.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-63.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-64.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-65.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-66.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-67.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-68.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-69.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-70.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-71.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-72.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-73.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-74.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-75.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-76.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-77.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-78.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-79.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-80.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-81.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-82.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-83.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-84.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-85.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-86.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-87.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-88.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-89.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-90.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-91.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-92.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-93.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-94.vswp    1,048,576.00 KB   1/18/2020 12:07:17 AM
lls-24.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-25.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-26.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-27.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-28.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-29.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-30.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-31.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-32.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-33.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-34.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-35.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-36.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-37.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-38.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-39.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-40.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-41.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-42.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-43.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-44.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-45.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-46.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-47.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-48.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-49.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-50.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-51.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-52.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-53.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-54.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-55.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-56.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-57.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-58.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-59.vswp    1,048,576.00 KB   1/18/2020 12:07:16 AM
lls-dirLock    0.00 KB           1/18/2020 12:07:15 AM
lls-1.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-2.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-3.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-4.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-5.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-6.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-7.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-8.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-9.vswp     1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-10.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-11.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-12.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-13.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-14.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-15.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-16.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-17.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-18.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-19.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-20.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-21.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-22.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
lls-23.vswp    1,048,576.00 KB   1/18/2020 12:07:15 AM
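For reference, a minimal sketch (esxcli option names as I understand them for 6.x, so treat them as an assumption) of how to check whether system swap / swap-to-host-cache is configured on this SSD, which appears to be what pre-allocates these fixed 1 GB .vswp slots:

# Where is system swap allowed to live (host cache, a named datastore, host-local swap)?
esxcli sched swap system get

# If swap should no longer be placed on the host cache SSD, it can be disabled here
# (or via the host client's swap/host cache settings) before cleaning anything up
esxcli sched swap system set --hostcache-enabled false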

 

 

 

Sorry for the long post. Thank you all for taking a look.

 

-Jason

USB passthrough broken after updating 6.7.0 build-9484548 to 6.7.0 build-15160138


Has anyone else experienced USB storage device passthrough not working in 6.7.0 build-15160138? lsusb shows that the host itself can see the Drobo USB device, but it's no longer available to select as a USB device in the VM config. There is mention of a USB update in this build's VMW_bootbank_vmkusb_0.1-1vmw.670.3.89.15160138 that may or may not be relevant: VMware ESXi 6.7, Patch Release ESXi670-201912001.

 

 

Lenovo 10MR0004US host

 

The update process used, which is nothing unusual:

:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-20191204001-standard

[InstallationError]

[Errno 28] No space left on device

       vibs = VMware_locker_tools-light_11.0.1.14773994-15160134

Please refer to the log file for more details.

~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-20191204001-no-tools

Update Result

   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

   Reboot Required: true

(rebooted)

~] esxcli software vib install -v https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_11.0.1.14773994-15160134.vib

Installation Result

   Message: Operation finished successfully.

   Reboot Required: false

   VIBs Installed: VMware_locker_tools-light_11.0.1.14773994-15160134

   VIBs Removed: VMware_locker_tools-light_10.2.1.8267844-8941472

   VIBs Skipped:

 

 

 

 

after update:

 

~] lsusb

Bus 001 Device 003: ID 19b9:3443 Data Robotics

Bus 001 Device 002: ID 8087:0a2b Intel Corp.

Bus 001 Device 001: ID 0e0f:8003 VMware, Inc. Root Hub

 

 

~] lsusb -v

Bus 001 Device 003: ID 19b9:3443 Data Robotics

Device Descriptor:

  bLength                18

  bDescriptorType         1

  bcdUSB               3.00

  bDeviceClass            0 (Defined at Interface level)

  bDeviceSubClass         0

  bDeviceProtocol         0

  bMaxPacketSize0         9

  idVendor           0x19b9 Data Robotics

  idProduct          0x3443

  bcdDevice            0.00

  iManufacturer           1 Drobo

  iProduct                2 Drobo5C

  iSerial                 3

  bNumConfigurations      1

  Configuration Descriptor:

    bLength                 9

    bDescriptorType         2

    wTotalLength           44

    bNumInterfaces          1

    bConfigurationValue     1

    iConfiguration          0

    bmAttributes         0xc0

      Self Powered

    MaxPower                0mA

    Interface Descriptor:

      bLength                 9

      bDescriptorType         4

      bInterfaceNumber        0

      bAlternateSetting       0

      bNumEndpoints           2

      bInterfaceClass         8 Mass Storage

      bInterfaceSubClass      6 SCSI

 

 

 

Prior to the update, the Drobo would be listed in the USB device dropdown; it's not available in a new VM created as a 6.7 U2 VM either:

 

 

 

:~] esxcli software profile get

(Updated) ESXi-6.7.0-20191204001-no-tools

   Name: (Updated) ESXi-6.7.0-20191204001-no-tools

   Vendor: Lenovo
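A hedged sketch of checks that might narrow down whether the new native vmkusb driver in this build is what changed the behaviour (disabling it is only a test, and only applies on builds that still include the legacy USB drivers):

# Confirm which USB modules are present/loaded after the update
esxcli system module list | grep -i usb

# Commonly reported workaround/test: disable the native vmkusb module and reboot so the
# legacy USB stack (if still installed) takes over
esxcli system module set -m vmkusb -e FALSE
reboot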

iSCSI datastore DQLEN change


Hi,

 

First of all, this is a pre-production environment.

 

I have connected the ESXi hosts in my ESXi 6.7 cluster to an S2D cluster.

The S2D cluster has two Windows Server 2019 servers. Each server has local NVMe, SSD, and HDD disks, and S2D is configured with these three disk types.

The S2D storage is provisioned to the ESXi cluster via iSCSI.

 

 

I have latency issues on the ESXi VMs.

ESXi uses the software iSCSI adapter.

I have disabled delayed acknowledgement.

I have changed the iSCSI max I/O size to 512 on the command line.

I have changed the iSCSI datastore disk queue depth (DQLEN) from 128 to 64 via the command line. However, it automatically changes while doing a file copy from Windows 2019, and I don't know why.
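For context, DQLEN in esxtop usually reflects the per-device "outstanding I/Os" limit, and it can be lowered on the fly by adaptive queueing / Storage I/O Control when the target reports congestion, which could explain it changing during a copy. A minimal sketch of the settings involved (the device ID is a placeholder):

# Per-device limit that esxtop shows as DQLEN
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 64
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i outstanding

# LUN queue depth of the software iSCSI initiator itself (module parameter, needs a reboot)
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=64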

 

But I'm still seeing low throughput (when I copy files between drives in Windows 2019). In esxtop (V), I'm seeing write latency of around 80 ms.

Any ideas? Is any other setting needed in ESXi to get good throughput?

 

Please note, this is not a production setup, and I'd like to get your advice.

This is not a compatible setup. I'm aware.

 

Thank you

IPMI errors from sensord


Hello,

 

On ESXi 6.7 U3b on a Supermicro hardware platform, I'm receiving lots of errors from sensord. What could it be? I don't see any network traffic attempts to the IPMI hardware, and all sensors on the hardware tab are green, but I'm getting this error every 5 seconds:

 

sensord[2099295]: recv_reply: bmc timeout after 20000 millisconds

sensord[2099295]: ipmi_completion: no reply, failed to communicate with bmc

 

and this message every 30 seconds:

 

Hostd: info hostd[2099828] [Originator@6876 sub=Default] IPMI SEL sync took 0 seconds 0 sel records, last 3

 

What could be the reason for this? Should I disable IPMI on ESXi? (VMkernel.Boot.ipmiEnabled = false)
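A hedged sketch of how that kernel option can be inspected and flipped from the CLI (the setting name is my assumption for the esxcli equivalent of VMkernel.Boot.ipmiEnabled, and a reboot is needed for it to take effect):

# Inspect the current value of the IPMI kernel boot option
esxcli system settings kernel list -o ipmiEnabled

# Disable in-band IPMI access (stops sensord polling the BMC); takes effect after reboot
esxcli system settings kernel set -s ipmiEnabled -v FALSE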


Move working directory with snapshots


I have a VMware server that will be retired within the next three months due to age. I have been getting alerts that there are snapshots that require consolidation. When I try to consolidate them, it errors out with an Input/Output error; it seems the storage disks are having some issues.

 

There are three virtual disks on this particular VM, they each have almost 150 snapshot files, and more keep getting created. I have tried the suggestion of manually creating a snapshot and then clicking "Delete all snapshots", but that didn't fix the issue.

 

Today, I attached a NAS so I could move the VMDKs to it. I also found out that you can change the "working directory" where the snapshots are kept, and I thought I might be able to change that to the NAS, but I am not sure if you can do that with existing snapshot files?

 

I was thinking that maybe this would work:

 

  1. Shut down the VM
  2. Edit the .vmx file to add the line that points to the new working directory (see the sketch after this list)
  3. Move all the snapshot files to the new working directory
  4. Fire up the VM
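A minimal sketch of the .vmx entry meant in step 2 (the datastore path is hypothetical; as far as I understand, workingDir only controls where new snapshot delta and swap files get created, so the existing delta chain still has to be moved or consolidated separately):

workingDir = "/vmfs/volumes/NAS01/abc-vm-workdir/"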

 

That sounds too easy, though, and I am sure there is a whole bunch missing. I guess my question basically is, what is the best way to handle this? I suppose I *could* manually copy all the VMDK files and related snapshots to the NAS, then remove the VMDKs and re-add them as existing drives? If I do that, I assume I would select the most recent snapshot file as the VMDK to use?

 

Thanks! :-)

"Lost Access To Volume" - tons of error messages


Hello,

 

I get about a thousand error messages every day about losing connections to datastores. I thought it was just a bug in ESXi 6.0 and would go away after upgrading to 6.7, but that didn't happen; I still get the same messages after deploying a fresh installation of ESXi 6.7. We have many HPE server models and many NetApp enterprise SAN storage systems, connected via eight Brocade 6510 switches, and neither the storage systems nor the SAN switches report any issues or latency. Also, VMware support has confirmed that all of the hardware drivers are compatible with the hardware and firmware versions.

 

I can't tell whether we are facing performance issues or not (mostly not), and I was intending to perform a SAN fabric reboot to test the paths from the servers to the SAN, but with these errors I won't.
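A minimal sketch of how the events could be correlated on one affected host before deciding on a fabric reboot (log paths per a standard ESXi 6.7 install):

# Lost/restored access events as the host records them
grep -i "lost access to volume" /var/log/vobd.log | tail -n 20

# Matching low-level storage messages (APD/PDL, aborts, latency warnings) around the same timestamps
grep -iE "apd|pdl|performance has deteriorated|abort" /var/log/vmkernel.log | tail -n 50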

 

Has anyone faced the same issue please?

 

Thank you,

Integrating ESXi hosts with AD will affect legacy application services - will it involve huge risk?


Hi All,

 

We are planning to integrate our ESXi hosts with AD. There are two domains in our environment: Domain A (legacy, going to be retired soon) and Domain B.

All the applications like BizTalk, Astea, Mastermind, Citrix, and so on work over a two-way interface. Is it a huge risk if we integrate the ESXi hosts with AD using Domain B? We also have the option to integrate the hosts with AD using Domain A, but the application team still sees a huge risk. Any pointers for guiding us in the right direction would be much appreciated. I'm ready to give more details on this if required.

 

Thanks

V

One host with vCenter Server - remediation


I have only one host in my lab, with vCenter running on it as a VM.

 

It is possible to "stage" updates, but of course to remediate, the vCenter Server must be shut down.

 

At that point, all the needed patches should already be located on the ESXi host.

 

Is it possible to manually start the remediation after shutting down the vCenter Server and putting the ESXi host into maintenance mode?
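One way this is commonly done without VUM is to apply a downloaded offline bundle directly with esxcli while vCenter is down; a minimal sketch (the depot file name and profile name are placeholders):

# With vCenter shut down, put the host in maintenance mode from the host itself
esxcli system maintenanceMode set --enable true

# Apply the downloaded/staged offline bundle directly, then reboot
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-6.7-update-depot.zip -p ESXi-6.7.0-20191204001-standard
reboot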

Synergy 4820C 10/20/25Gb CNA Mezzanine card with ESXi 6.5


My Synergy Gen10 blades have the Synergy 4820C 10/20/25Gb CNA mezzanine card and ESXi 6.5 U2 installed.

I have realized that on some blades in my cluster the storage adapter shows up as vmhbaXX, while on other blades it is reported only as a vmnic.

 

Below are the driver and firmware versions of the card. I need to know why it is reporting like that when the driver, firmware, and ESXi versions are the same.

 

Any help is much appreciated.

 

Driver Info: qedf:1.3.22.0

Host Device Name: vmhba65, cdev Name: qedfc1

ISP: ISP165c

Firmware Version: 08.37.09.00

MFW Version: 08.37.15.00

Synergy 4820C 10/20/25Gb CNA Mezzanine Slot 3 08.37.34
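A minimal sketch of how the two kinds of blades could be compared to see whether the FCoE function is being created at all (commands per a standard 6.5 install):

# Storage adapters and the driver that claimed them (working blades should show a qedf vmhba)
esxcfg-scsidevs -a

# Network functions of the same CNA
esxcli network nic list

# Whether an FCoE adapter has been instantiated on top of the NIC function
esxcli fcoe adapter list
esxcli fcoe nic list

# Confirm the qed* driver VIB versions really match across blades
esxcli software vib list | grep -i qed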
