Channel: VMware Communities : Unanswered Discussions - ESXi
Viewing all 4823 articles

Samsung SSD (ATA) - Reported temperature too high


Using ESXi 6.7 U3

Have 3 SSD drives in a 1U Supermicro SYS-5019D-FN8TP

1x Samsung 840 EVO (ESXi install)

1x Samsung 970 PRO NVMe (datastore)

1x Samsung 960 PRO SATA (datastore)

 

Via the CLI, I can see the following:

[attachment: CLI.png]

Both SATA drives are reported at 73 degrees, while the M.2 drive is at 30 degrees.
I have tested with a live Linux USB, and all 3 SSDs idle at 25-30 degrees Celsius, with a maximum under load of ~35 degrees for the two SATA drives and ~55 for the M.2. Ambient temperature is 22 degrees Celsius.

I am pretty sure the reported values for both SATA SSDs are incorrect, but then again, I'm not 100% sure.
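For anyone comparing numbers: a minimal sketch of how these per-device values can be read in the ESXi shell. The device identifier below is a placeholder (list your own with `esxcli storage core device list`), and the sketch is guarded so it exits cleanly outside an ESXi shell.

```shell
# Read the SMART table (including Drive Temperature) for one device.
DEVICE="t10.ATA_____Samsung_SSD_840_EVO_placeholder"   # hypothetical identifier
if command -v esxcli >/dev/null 2>&1; then
  esxcli storage core device smart get -d "$DEVICE"
else
  echo "esxcli not found; run this in the ESXi shell"
fi
```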

 

Could anyone with similar Samsung SSDs please tell me how their devices' temperatures are reported?

 

Thank you


ESXi 6.7: VM migration between hosts not possible if the VM has a snapshot ("Unable to enumerate all disks")


We have a bunch of standalone ESXi 6.7.0 (Build 8169922) hosts with local storage. I've run into the problem that when I move a VM from host A to host B using an SCP transfer, and that VM currently has a snapshot, it will not start on host B. The error message is "Failed to power on virtual machine. Unable to enumerate all disks."

 

Now, I've been running ESXi hosts for probably a decade and this always used to work without a problem, so I think this is a problem starting in ESXi > 6.5, maybe? I do nothing weird: I shut down the VM on host A, SCP the entire VM directory to host B, and boot it there. It only happens when the VM has a snapshot. If I SCP the same VM without a snapshot, it boots fine on host B. If I edit out the snapshot on host B after the transfer (by manually removing references to the snap), it also boots fine.
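For reference, the move described above is essentially the following (a sketch: the VM directory, datastore paths, and host B address are placeholders, and the ESXi-only commands are guarded so the sketch exits cleanly elsewhere):

```shell
# Cold-migrate a VM between standalone hosts via SCP (sketch).
VM_DIR="/vmfs/volumes/datastore1/myvm"          # placeholder VM directory
DEST="root@hostB:/vmfs/volumes/datastore1/"     # placeholder destination
if command -v vim-cmd >/dev/null 2>&1; then
  vim-cmd vmsvc/getallvms                       # note the VM's ID
  # vim-cmd vmsvc/power.shutdown <vmid>         # clean shutdown before copying
  scp -r "$VM_DIR" "$DEST"
  # then on host B:
  #   vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
else
  echo "vim-cmd not found; run this in the ESXi shell on host A"
fi
```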

 

I have no clue why this won't work, because all the references for the snapshot should be in the VM directory (specifically the .vmsn and .vmsd file), not on the ESXi host. The hosts are the same ESXi version.

 

When I try to boot the VM and get the mentioned error, nothing is written to vmware.log in the VM directory.  Also the ESXi host seems to write no logs about the error.

 

If I attempt to consolidate disks in the host B web interface, I get "Disk consolidation for VM has failed: The operation is not supported on the object". The snapshot manager in the web interface does show the current snapshots, but I can't delete them or create a new snapshot; everything gives a generic "failed" error.

 

Any help appreciated or confirmation that you have the same issue. If there are workarounds I'd love to hear them!

How to change (increase) size of ESXi system partitions?


Hi all!

 

After installing the ESXi host I got this partition table:

[root@esx:~] df -h

Filesystem   Size   Used Available Use% Mounted on

vfat       285.8M 172.9M    112.9M  60% /vmfs/volumes/5c52dbe5-9ee717e2-6f58-3a9fa14000ba

vfat       249.7M 227.3M     22.4M  91% /vmfs/volumes/5039b0eb-245b9a3e-d595-58efc249425a

vfat       249.7M 232.6M     17.1M  93% /vmfs/volumes/07ac5fe1-e6780032-5912-27bc4b104cad

 

[root@esx:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"

gpt

968 255 63 15564800

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

 

mpx.vmhba32:C0:T0:L0 is an 8 GB SD card.

 

Now I want to increase the size of the partitions to be able to install VMware updates (there is no space to pre-stage them). How can I do this? Is reinstalling the ESXi host the only way?

vCPU and memory allocation: preferred combination?


I'm a bit confused about my vCPU and particularly my memory allocation. Can anyone provide some insight? My system is in GCP with 1 vCPU and 4 GB memory.

 

And at some point every day I get a 522 timeout error, saying "The origin web server timed out responding to this request."

[attachment: Screenshot (415).png]

 

Can anyone explain how the Memory limit for a VM works?

 

I mean, what is the best combination I can use so that my VM stops getting this error?

Configuration change logs and authentication logs


Hello

I want to know which logs in ESXi show configuration changes and authentication events.

Where are they located, and how can I collect only those logs?

How can I find them in the syslog output?
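As a starting point, these are the standard ESXi log files that cover authentication and configuration activity (auth.log for SSH/DCUI logins, hostd.log for configuration operations through the host agent, shell.log for ESXi shell commands). A sketch that filters them; the grep pattern and tail length are assumptions to adjust, and the loop also runs outside ESXi, just reporting files that are not present:

```shell
# Filter authentication-related entries from the usual ESXi log files.
AUTH_LOGS="/var/log/auth.log /var/log/hostd.log /var/log/shell.log"
for LOG in $AUTH_LOGS; do
  if [ -f "$LOG" ]; then
    echo "== $LOG =="
    grep -iE "login|auth|accepted|denied" "$LOG" | tail -n 5
  else
    echo "not present on this system: $LOG"
  fi
done
```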

 

Thanks a lot

How to 'display' the guest OS on the host machine through the CLI


I've installed ESXi on an old laptop as a way to learn how to use it.

I've also installed a VM, which I can manage from the web interface on my PC.

 

Now I need to use the guest OS from the laptop itself (on which both the ESXi installation and the guest OS reside).

I enabled the shell and SSH, and I powered on the VM from the command line. But how do I display the virtual machine on the laptop?

Can't start VMware VM in its current state


Hello,

I am John Tankersley. I am running VMware ESXi 6.0 on an HP P4300 G2.

I have 2 Xeon processors with 4 cores each, for a total of 8 cores at 2.6 GHz.

I have about 8 gigabytes of RAM.

I can log into ESXi 6.0 on the server by putting the IP address into a browser on a client machine.

I can't start the installed virtual machine. I keep getting the message that the VM can't be started in its current state.

Can anyone give me any help?

John Tankersley

john_tnkrsly@yahoo.com

Thank-you.

Newbie question - Booting ESXi via USB stick

Hello,

 

 

I've been using an Intel NUC 8th gen as my home ESXi 6.7 U3 server for 6 months. What I did was install ESXi on the internal SSD drive, and it's been working well. Something has corrupted my host recently and I want to rebuild. On that SSD is also my Datastore 1, and I have another SSD inside with Datastore 2.

 

 

Anyway, many have said to boot off a USB key instead and just keep Datastore 1 and 2 on their disks. I like this idea.

 

 

Well, I just used Rufus to build a bootable USB drive with 6.7 U3, and it boots off the USB key fine, but it gets to a point where it can't find a network card/device and has to halt. The NUC has a 1 Gb port, and what is strange is that when I installed to the internal SSD 6 months ago, the installer found the NIC fine. Why is booting off USB different? Do I need to inject drivers?

 

 

If I reboot into my corrupt host, I can ping the host, so the NIC is fine and recognised.

 

 

Don't know what to do now

 

 

Any ideas?

 

 

Thanks

 


The ramdisk 'var' is full


Hi guys.

 

I am thinking of rebooting this host, which shows the error message. Is there anything else I should check prior to a reboot?

I am not sure how to get onto the host to check which files are there.
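Before rebooting, it is worth seeing what is actually filling /var. A sketch of the usual checks over SSH to the host; `vdf` and `esxcli` exist only on ESXi and are guarded here, while the `du` part works on any POSIX system:

```shell
# Inspect ramdisk usage and the biggest files under /var before rebooting.
if command -v vdf >/dev/null 2>&1; then
  vdf -h                               # per-ramdisk usage (look at the 'var' line)
  esxcli system visorfs ramdisk list   # ramdisk sizes and mount points
fi
# Ten largest entries under /var (often rotated logs or a stuck dump file):
du -a /var 2>/dev/null | sort -rn | head -n 10
```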

 

Thanks

Failed to lock the file


I have ESXi 5.1 standalone with 3 VMs running on the host. A virtual machine crashed on that host, then I rebooted the VM. What is the problem? I get this warning on the host:

 

 

Lost access to volume 57beefa6-e02fd332-e776-901b0e6c7e5c (datastore1) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.
(info, 01.02.2020 20:24:40, datastore1)

Message on appa-01: The operation on the file "/vmfs/devices/deltadisks/23bbc4f-vm-100-disk-1-s001-s001-s001-s001.vmdk" failed (Failed to lock the file). The file system where disk "/vmfs/devices/deltadisks/23bbc4f-vm-100-disk-1-s001-s001-s001-s001.vmdk" resides is full. Select Retry to attempt the operation again. Select Cancel to end the session.
(info, 01.02.2020 20:25:27, appa-01, User)

ESXi 6.7 with "net51-r8169" network driver, dependency error of "vmkapi"


Hello,
I must virtualize my dedicated server with ESXi. The server's processor is a Ryzen 3600, and I need to use ESXi 6.7 or newer. The server's network adapter is a Realtek RTL-8169, and its driver hasn't been supported by default since ESXi 6.1 (https://vibsdepot.v-front.de/wiki/index.php/Net51-r8169). That page says the driver does NOT work with ESXi 6.7 and newer.
I tried to inject the driver into the latest ESXi 6.7 with VMware PowerCLI, but some kind of dependency problem occurred; the logs are below:

 

PowerCLI C:\> .\ESXi\ESXi-Customizer-PS-v2.6.0.ps1 -v67 -vft -load sata-xahci,esxcli-shell,net51-r8169

 

This is ESXi-Customizer-PS Version 2.6.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)

(Call with -help for instructions)

 

Logging to C:\Users\mbdor\AppData\Local\Temp\ESXi-Customizer-PS-29064.log ...

 

Running with PowerShell version 5.1 and VMware PowerCLI version 6.5.0.2604913

 

Connecting the VMware ESXi Online depot ... [OK]

 

Connecting the V-Front Online depot ... [OK]

 

Getting Imageprofiles, please wait ... [OK]

 

Using Imageprofile ESXi-6.7.0-20191204001-standard ...

(dated 11/25/2019 11:42:42, AcceptanceLevel: PartnerSupported,

Updates ESXi 6.7 Image Profile-ESXi-6.7.0-20191204001-standard)

 

Load additional VIBs from Online depots ...

   Add VIB sata-xahci 1.42-1 [New AcceptanceLevel: CommunitySupported] [OK, added]

   Add VIB esxcli-shell 1.1.0-15 [OK, added]

   Add VIB net51-r8169 6.011.00-2vft.510.0.0.799733 [OK, added]

 

Exporting the Imageprofile to 'C:\ESXi\ESXi-6.7.0-20191204001-standard-customized.iso'. Please be patient ...

 

WARNING: The image profile fails validation.  The ISO / Offline Bundle will still be generated but may contain errors and may not boot or be functional.  Errors:

WARNING:   VIB VFrontDe_bootbank_net51-r8169_6.011.00-2vft.510.0.0.799733 requires vmkapi_2_1_0_0, but the requirement cannot be satisfied within the ImageProfile. However, additional VIB(s)

VMware_bootbank_esx-base_5.5.0-0.14.1598313, VMware_bootbank_esx-base_6.0.0-3.87.8934903, VMware_bootbank_esx-base_5.1.0-3.52.2575044, VMware_bootbank_esx-base_6.0.0-3.100.9313334,

VMware_bootbank_esx-base_5.1.0-0.5.838463, VMware_bootbank_esx-base_6.0.0-3.57.5050593, VMware_bootbank_esx-base_6.0.0-0.0.2494585, VMware_bootbank_esx-base_6.5.0-1.29.6765664,

VMware_bootbank_esx-base_6.0.0-0.8.2809111, VMware_bootbank_esx-base_5.1.0-1.19.1312874, VMware_bootbank_esx-base_6.5.0-2.71.10868328, VMware_bootbank_esx-base_6.0.0-3.76.6856897,

VMware_bootbank_esx-base_5.1.0-1.15.1142907, VMware_bootbank_esx-base_5.1.0-2.29.1900470, VMware_bootbank_esx-base_5.1.0-3.60.3070626, VMware_bootbank_esx-base_5.1.0-3.82.3872638,

VMware_bootbank_esx-base_6.5.0-2.61.10175896, VMware_bootbank_esx-base_6.5.0-3.96.13932383, VMware_bootbank_esx-base_6.5.0-0.11.5146843, VMware_bootbank_esx-base_5.5.0-3.114.7967571,

VMware_bootbank_esx-base_5.5.0-1.28.1892794, VMware_bootbank_esx-base_6.0.0-2.37.3825889, VMware_bootbank_esx-base_6.0.0-0.6.2715440, VMware_bootbank_esx-base_6.5.0-2.64.10390116,

VMware_bootbank_esx-base_5.5.0-2.54.2403361, VMware_bootbank_esx-base_5.1.0-2.47.2323231, VMware_bootbank_esx-base_6.0.0-0.11.2809209, VMware_bootbank_esx-base_5.1.0-3.50.2323236,

VMware_bootbank_esx-base_5.1.0-1.16.1157734, VMware_bootbank_esx-base_6.0.0-1.20.3073146, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-base_6.5.0-3.101.14320405,

VMware_bootbank_esx-base_5.5.0-2.62.2718055, VMware_bootbank_esx-base_6.5.0-1.26.5969303, VMware_bootbank_esx-base_6.5.0-0.9.4887370, VMware_bootbank_esx-base_5.1.0-1.22.1472666,

VMware_bootbank_esx-base_5.1.0-0.10.1021289, VMware_bootbank_esx-base_6.0.0-3.113.13003896, VMware_bootbank_esx-base_5.1.0-3.85.3872664, VMware_bootbank_esx-base_6.0.0-3.58.5224934,

VMware_bootbank_esx-base_5.5.0-3.107.7618464, VMware_bootbank_esx-base_5.5.0-0.0.1331820, VMware_bootbank_esx-base_5.5.0-3.86.4179631, VMware_bootbank_esx-base_5.5.0-3.71.3116895,

VMware_bootbank_esx-base_6.0.0-3.138.15169789, VMware_bootbank_esx-base_6.0.0-0.14.3017641, VMware_bootbank_esx-base_6.0.0-3.72.6765062, VMware_bootbank_esx-base_6.0.0-0.5.2615704,

VMware_bootbank_esx-base_6.5.0-0.14.5146846, VMware_bootbank_esx-base_6.5.0-2.83.13004031, VMware_bootbank_esx-base_6.0.0-3.66.5485776, VMware_bootbank_esx-base_5.1.0-0.9.914609,

VMware_bootbank_esx-base_5.1.0-0.0.799733, VMware_bootbank_esx-base_6.5.0-2.57.9298722, VMware_bootbank_esx-base_6.5.0-3.108.14990892, VMware_bootbank_esx-base_5.5.0-3.103.6480267,

VMware_bootbank_esx-base_6.5.0-1.33.7273056, VMware_bootbank_esx-base_6.5.0-2.50.8294253, VMware_bootbank_esx-base_5.1.0-2.41.2191354, VMware_bootbank_esx-base_5.5.0-1.18.1881737,

VMware_bootbank_esx-base_5.1.0-2.28.1743533, VMware_bootbank_esx-base_6.0.0-1.26.3380124, VMware_bootbank_esx-base_6.0.0-3.129.14513180, VMware_bootbank_esx-base_5.5.0-3.81.3343343,

VMware_bootbank_esx-base_6.0.0-3.79.6921384, VMware_bootbank_esx-base_5.5.0-2.42.2302651, VMware_bootbank_esx-base_5.5.0-3.106.6480324, VMware_bootbank_esx-base_6.0.0-2.40.4179598,

VMware_bootbank_esx-base_5.1.0-1.20.1312873, VMware_bootbank_esx-base_5.5.0-3.124.9919047, VMware_bootbank_esx-base_5.5.0-3.120.9313066, VMware_bootbank_esx-base_5.1.0-3.55.2583090,

VMware_bootbank_esx-base_5.1.0-2.27.1743201, VMware_bootbank_esx-base_6.0.0-1.22.3247720, VMware_bootbank_esx-base_5.5.0-3.95.4345813, VMware_bootbank_esx-base_5.5.0-3.117.8934887,

VMware_bootbank_esx-base_5.5.0-1.16.1746018, VMware_bootbank_esx-base_6.5.0-0.23.5969300, VMware_bootbank_esx-base_6.5.0-3.105.14874964, VMware_bootbank_esx-base_6.0.0-3.93.9239792,

VMware_bootbank_esx-base_5.5.0-2.58.2638301, VMware_bootbank_esx-base_5.5.0-2.59.2702869, VMware_bootbank_esx-base_6.0.0-1.31.3568943, VMware_bootbank_esx-base_6.5.0-2.67.10719125,

VMware_bootbank_esx-base_5.5.0-3.89.4179633, VMware_bootbank_esx-base_5.1.0-3.57.3021178, VMware_bootbank_esx-base_5.1.0-0.8.911593, VMware_bootbank_esx-base_5.5.0-3.75.3247226,

VMware_bootbank_esx-base_6.0.0-3.116.13635687, VMware_bootbank_esx-base_5.5.0-2.55.2456374, VMware_bootbank_esx-base_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.0.0-2.52.4600944,

VMware_bootbank_esx-base_5.5.0-2.39.2143827, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-base_5.5.0-1.15.1623387, VMware_bootbank_esx-base_6.0.0-2.43.4192238,

VMware_bootbank_esx-base_6.0.0-3.96.9239799, VMware_bootbank_esx-base_5.5.0-2.65.3029837, VMware_bootbank_esx-base_6.5.0-0.15.5224529, VMware_bootbank_esx-base_6.5.0-3.111.15177306,

VMware_bootbank_esx-base_5.5.0-3.101.5230635, VMware_bootbank_esx-base_5.1.0-2.23.1483097, VMware_bootbank_esx-base_6.0.0-1.29.3568940, VMware_bootbank_esx-base_6.0.0-2.54.5047589,

VMware_bootbank_esx-base_6.5.0-1.47.8285314, VMware_bootbank_esx-base_5.1.0-2.32.1904929, VMware_bootbank_esx-base_5.5.0-3.84.3568722, VMware_bootbank_esx-base_5.5.0-0.15.1746974,

VMware_bootbank_esx-base_5.5.0-3.100.4722766, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-base_6.0.0-3.69.5572656, VMware_bootbank_esx-base_6.0.0-2.49.4558694,

VMware_bootbank_esx-base_5.5.0-1.25.1892623, VMware_bootbank_esx-base_6.0.0-3.84.7967664, VMware_bootbank_esx-base_6.0.0-1.17.3029758, VMware_bootbank_esx-base_5.5.0-3.97.4756874,

VMware_bootbank_esx-base_6.0.0-3.107.10474991, VMware_bootbank_esx-base_6.5.0-3.116.15256468, VMware_bootbank_esx-base_6.5.0-0.19.5310538, VMware_bootbank_esx-base_5.5.0-0.8.1474528,

VMware_bootbank_esx-base_6.5.0-2.79.11925212, VMware_bootbank_esx-base_6.5.0-2.88.13635690, VMware_bootbank_esx-base_5.1.0-2.35.2000251, VMware_bootbank_esx-base_5.5.0-1.30.1980513,

VMware_bootbank_esx-base_5.5.0-0.7.1474526, VMware_bootbank_esx-base_5.1.0-1.13.1117900, VMware_bootbank_esx-base_6.0.0-3.135.15018929, VMware_bootbank_esx-base_6.5.0-2.75.10884925,

VMware_bootbank_esx-base_5.5.0-2.33.2068190, VMware_bootbank_esx-base_6.5.0-2.92.13873656, VMware_bootbank_esx-base_5.5.0-3.78.3248547, VMware_bootbank_esx-base_5.1.0-0.11.1063671,

VMware_bootbank_esx-base_6.0.0-1.23.3341439, VMware_bootbank_esx-base_5.1.0-2.26.1612806, VMware_bootbank_esx-base_5.5.0-3.92.4345810, VMware_bootbank_esx-base_5.5.0-2.51.2352327,

VMware_bootbank_esx-base_6.5.0-3.120.15256549, VMware_bootbank_esx-base_6.0.0-2.46.4510822, VMware_bootbank_esx-base_5.5.0-2.36.2093874, VMware_bootbank_esx-base_6.5.0-2.54.8935087,

VMware_bootbank_esx-base_5.5.0-3.68.3029944, VMware_bootbank_esx-base_6.0.0-3.110.10719132, VMware_bootbank_esx-base_5.1.0-2.44.2191751, VMware_bootbank_esx-base_6.0.0-3.125.14475122,

VMware_bootbank_esx-base_5.1.0-1.12.1065491 from depot can satisfy this requirement.

 

All done.

 

The important section of the log is the validation warning above. I do not have any idea about "vmkapi"; I need help.

ESXTOP - System %RDY


I am looking at my ESXTOP output, and from what I can see my CPU load is OK and my %PCPU is OK. My question: can anyone tell me what this SYSTEM metric for %RDY is, and why it is so high compared to my VMs, which seem to have an acceptable %RDY? I am having issues with "pokey" VMs. What I am noticing is that %VMWAIT spikes frequently for a VM even though its CPU utilization seems acceptable. I know I am missing something here. Thanks.

 

2:36:27pm up 5 days 16:04, 671 worlds, 3 VMs, 14 vCPUs; CPU load average: 0.21, 0.22, 0.24

PCPU USED(%):  20  22  30 1.9  33 3.2 6.7  28 5.0  31 3.6  32  33 0.1  28 1.3 0.2 0.2 0.3  30  28 0.0  32 0.0  15 0.0  11 0.2 7.9 0.1  31 0.0 AVG:  13

PCPU UTIL(%):  19  21  30 2.5  31 3.9 7.4  27 5.4  29 4.0  29  30 0.2  25 1.4 0.2 0.2 0.3  28  26 0.1  29 0.1  13 0.1  10 0.3 8.5 0.1  28 0.1 AVG:  13

CORE UTIL(%):  39      31      33      32      33      33      30      26     0.4      28      26      29      13      10     8.5      29     AVG:  25

 

      ID      GID  NAME           NWLD   %USED     %RUN  %SYS     %WAIT  %VMWAIT    %RDY   %IDLE  %OVRLP  %CSTP  %MLMTD  %SWPWT
   13679    13679  AAAASERVER02     16  153.50   140.51  0.05   1468.00     0.43    0.13  362.65    0.18   0.00    0.00    0.00
  133816   133816  ZZZSERVER02      14  130.42   120.65  0.09   1285.00    16.59    0.11  264.48    0.14   0.00    0.00    0.00
   13664    13664  PP01             15   54.34    50.21  0.11   1457.78     2.39    0.05  450.44    0.12   0.00    0.00    0.00
       1        1  system          298    1.12  2875.21  0.00  26664.11        -  341.75    0.00    0.93   0.00    0.00    0.00

Wrong CPU type


Hello,

we migrated some VMs from older servers to new ones.

I changed the EVC mode for each cluster to the highest level.

No change of the virtual hardware version was done.

 

Now we found out that the wrong processor type was shown (the old one), even after a reboot.

Upgrading the virtual hardware version and VMware Tools fixed this.

 

Since VMware isn't emulating the CPU, shouldn't the right processor type be shown after a reboot, as the VM checks the CPU when it starts?

Random restarts of Server 2019 on ESX 6.0.0


I have a freshly installed Server 2019 that randomly restarts. The event log shows an uptime event, and the next event, nine seconds later, is a bootup event. There is also a Hyper-V machine running Server 2019 that disappears after these boots. But after rebooting another time, the Hyper-V guest reappears. This is the second time I have encountered this in about a week.

 

Server 2019 is supported by ESX 6.0.0.

ESXi upgrade


We have PSOD errors on our ESXi hosts; the problem is the NIC driver. Now we want to upgrade the ESXi hosts, NIC firmware, etc.

The PSOD usually happens when moving a virtual machine to another host; the ESXi host then suddenly crashes. I want to ask: how can I upgrade an ESXi host without moving the virtual machines off it? Some servers have to run 24/7, and a host may go down while VMs are being moved.

How can I upgrade without turning off the servers, given this PSOD bug?


Corrupted iSCSI LUN used as a VMFS datastore


I would like to ask for assistance regarding an iSCSI LUN used as a datastore. It started when I upgraded my ESXi host to 6.7. I have created a dump file. I would like to know how to proceed.

Your help is much appreciated, Mr. continuum.

Thanks

[ warning] [guestinfo] GuestInfoGetDiskDevice: Missing disk device name; VMDK mapping unavailable for "/", fsName: "/dev/sda2"


After updating to open-vm-tools 11.0.1, errors began to appear in the logs; they are recorded every minute:

 

[ warning] [guestinfo] GuestInfoGetDiskDevice: Missing disk device name; VMDK mapping unavailable for "/", fsName: "/dev/sda2"

 

 

How can I solve this problem?

 

Ubuntu 18.04.3

ESXi 6.7.0 Update 3 (Build 15160138)

Overlapping Partitions


Can't have overlapping partitions.

Unable to read partition table for device /vmfs/devices/disks/

CRASH & purple screen on ESX 6.5.0 exception 14


Hello,

I'm using ESXi 6.5.0 release build 5310538, without any problems until now.

Today I got a purple screen with exception 14.

 

I used the search function of this forum and read through some threads and posts. I found out that sometimes it's a hardware problem, sometimes a software failure, but it could also be (on older versions) a problem with the network card.

 

How should I follow up on this? Any suggestions what to check or what to do?

Screenshot attached.

[attachment: esx_error_slavkali.JPG]

A cold reboot of the system solved the problem for now, but I want to find the root cause.

 

I'll now go and collect all necessary log files.

 

Thank you for your help,

LLDP+Broadcom 10/25g


VMware ESXi, 6.7.0, 15160138.

 

Earlier, many of us had issues with Intel X710 NICs having a hardware LLDP agent that made LLDP unavailable from the VMware side of things. There is a similar issue with Broadcom NICs, but LLDP works fine from the VMware side. From the switch side, however, it gets announcements both from VMware and from the NIC itself. While VMware's LLDP agent transmits both the server hostname and the vmnic name, the hardware NIC agent only transmits the physical MAC address.

 

It looks like different switch vendors handle this scenario differently: Cisco seems to only store the last value it received, while Arista stores both values.

In the Arista management UI, the MAC address is listed first, and in many views only the first line is shown.

 

As far as I can tell, there is no parameter available in the ESXi 6.7 bnxtnet driver to disable LLDP in the same way we could on the Intel X710:

] esxcli system module parameters list -m bnxtnet
Name                          Type          Value  Description
----------------------------  ------------  -----  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DRSS                          array of int         Number of RSS Queues to create on the Default Queue RSS pool. [Default: 4, Max: 16]
RSS                           array of int         Number of RSS queues to create in netqueue RSS pool. [Default: 4, Max: 16]
async_cmd_cmpl_timeout        uint                 For debug purposes, currently set to 10000 msec's, user can increase it. [Default: 10000, Min: 10000]
debug                         uint                 Debug msglevel: Default is 0 for Release builds
disable_dcb                   bool                 Disable the DCB support. 0: enable DCB support, 1: disable DCB support. [Default: 1]
disable_fwdmp                 bool                 For debug purposes, disable firmware dump feature when set to value of 1. [Default: 0]
disable_geneve_filter         bool                 For debug purposes, disable Geneve filter support feature when set to value of 1. [Default: 0]
disable_geneve_oam_support    bool                 For debug purposes, disable Geneve OAM frame support feature when set to value of 1. [Default: 1]
disable_q_feat_pair           bool                 For debug purposes, disable queue pairing feature when set to value of 1. [Default: 0]
disable_q_feat_preempt        bool                 For debug purposes, disable FEAT_PREEMPTIBLE when set to value of 1. [Default: 0]
disable_roce                  bool                 Disable the RoCE support. 0: Enable RoCE support, 1: Disable RoCE support. [Default: 1]
disable_shared_rings          bool                 Disable sharing of Tx and Rx rings support. 0: Enable sharing, 1: Disable sharing. [Default: 0]
disable_tpa                   bool                 Disable the TPA(LRO) feature. 0: enable TPA, 1: disable TPA. [Default: 0]
disable_vxlan_filter          bool                 For debug purposes, disable VXLAN filter support feature when set to value of 1. [Default: 0]
enable_default_queue_filters  int                  Allow filters on the default queue. -1: auto, 0: disallow, 1: allow. [Default: -1, which enables the feature when NPAR mode and/or VFs are enabled, and disables if otherwise]
enable_dr_asserts             bool                 For debug purposes, set to 1 to enable driver assert on failure paths, set to 0 to disable driver asserts. [Default: 0]
enable_geneve_ofld            bool                 Enable Geneve TSO/CSO offload support. 0: disable Geneve offload, 1: enable Geneve offload. [Default: 1]
enable_host_dcbd              bool                 Enable host DCBX agent. 0: disable host DCBX agent, 1: enable host DCBX agent. [Default: 0]
enable_r_writes               bool                 For debug purposes, set to 1 to enable r writes, set to 0 to disable r writes. [Default: 0]
enable_vxlan_ofld             bool                 Enable VXLAN TSO/CSO offload support. 0: disable, 1: enable. [Default: 1]
force_hwq                     array of int         Max number of hardware queues: -1: auto-configured, 1: single queue, 2..N: enable this many hardware queues. [Default: -1]
int_mode                      uint                 Force interrupt mode. 0: MSIX; 1: INT#x. [Default: 0]
max_vfs                       array of int         Number of Virtual Functions: 0: disable, N: enable this many VFs. [Default: 0]
multi_rx_filters              int                  Define the number of RX filters per NetQueue: -1: use the default number of RX filters, 0,1: disable use of multiple RX filters, so single filter per queue, 2..N: force the number of RX filters to use for a NetQueue. [Default: -1]
psod_on_tx_tmo                bool                 For debug purposes, set to 1 to force PSOD on tx timeout, set to 0 to disable PSOD on tx timeout. [Default: 0]

 

NIC firmware and driver versions:

] esxcli network nic get -n vmnic4
   Advertised Auto Negotiation: true
   Advertised Link Modes: 1000BaseCR1/Full, 25000BaseCR1/Full, Auto
   Auto Negotiation: true
   Cable Type: DA
   Current Message Level: 0
   Driver Info:
         Bus Info: 0000:a1:00:0
         Driver: bnxtnet
         Firmware Version: 214.0.253.1
         Version: 214.0.230.0

 

Anyone seen this issue before?

 

Lars
