GPU PCI-e configuration question

My new GPU requires three 8-pin connectors. My PSU came with two single 6+2 cables and two dual 6+2 cables (aka pigtailed/daisy-chained/whatever). Would it be OK to use the two single 6+2 cables, use one head of a dual cable as the third, and just leave the other pigtailed connector dangling off to the side unused?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/SaintJ92
πŸ“…︎ Jan 12 2021
🚨︎ report
Unable to complete install: 'unsupported configuration: host doesn't support passthrough of host PCI devices'

I've literally been punching my monitor for the past 2 hours trying to fix this, going through tutorials, that Arch wiki page that's so vague it might as well be in Chinese, and loads of other stuff. Please tell me specifically what I have done wrong; it's getting to my head and I'm getting so mad over this.

KDE Neon

Unable to complete install: 'unsupported configuration: host doesn't support passthrough of host PCI devices'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createvm.py", line 2089, in _do_async_install
    guest.installer_instance.start_install(guest, meter=meter)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 542, in start_install
    domain = self._create_guest(
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 491, in _create_guest
    domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/usr/lib/python3/dist-packages/libvirt.py", line 4034, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: unsupported configuration: host doesn't support passthrough of host PCI devices
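
For anyone landing here from a search: libvirt raises this when it finds neither VFIO nor legacy KVM device assignment available on the host. A quick sanity check from a shell (a sketch, assuming a standard Linux host; VT-d/AMD-Vi must be enabled in firmware and intel_iommu=on or amd_iommu=on set on the kernel command line):

$ dmesg | grep -i -e DMAR -e IOMMU   # firmware/kernel IOMMU messages
$ ls /sys/kernel/iommu_groups/       # non-empty once the IOMMU is active
$ ls /dev/vfio/                      # appears after 'sudo modprobe vfio-pci'
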
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/Zentoxy
πŸ“…︎ Dec 13 2020
🚨︎ report
QEMU/KVM: unsupported configuration: pci backend driver 'default' is not supported (GPU passthrough)

I've enabled VT-d in the BIOS and added `intel_iommu=on` to the kernel parameters:

$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.4.0-58-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro intel_iommu=on

Loaded successfully:

$ dmesg | grep IOMMU
[    0.035043] DMAR: IOMMU enabled
[    0.074242] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed90000 IOMMU 0

I added the GPU passthrough in Virt Manager:

https://preview.redd.it/njrd5mng03961.png?width=640&format=png&auto=webp&s=6adb98188364dea63458bc55ea03f976ed9eda5f

I see these errors in journalctl when I try to run it:

$ journalctl -u libvirtd --since "10 minutes ago"
-- Logs begin at Sat 2020-12-26 19:46:44 UTC, end at Sun 2021-01-03 08:45:30 UTC. --
Jan 03 08:36:12 skylake libvirtd[1289]: libvirt version: 6.0.0, package: 0ubuntu8.5 (Christian Ehrhardt <christian.ehrhardt@canonical.com> Thu, 08 Oct 2020 07:36:06>
Jan 03 08:36:12 skylake libvirtd[1289]: hostname: skylake
Jan 03 08:36:12 skylake libvirtd[1289]: unsupported configuration: pci backend driver 'default' is not supported
Jan 03 08:36:12 skylake libvirtd[1289]: Failed to allocate PCI device list: unsupported configuration: pci backend driver 'default' is not supported

But I don't see how I can change the driver, or even what value I should set it to.
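
For reference, libvirt selects the PCI backend per device in the domain XML, so that is where a different value would go. A minimal sketch (the domain name and PCI address here are hypothetical):

$ virsh edit myvm
# inside the <hostdev> entry for the GPU, name the backend explicitly:
#
#   <hostdev mode='subsystem' type='pci' managed='yes'>
#     <driver name='vfio'/>
#     <source>
#       <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
#     </source>
#   </hostdev>
#
# 'vfio' is the usual choice for GPU passthrough; it requires the vfio-pci
# module to be loaded and the IOMMU active on the host.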

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/sp00ky31
πŸ“…︎ Jan 03 2021
🚨︎ report
Please help me! First time boot and stuck at "pci configuration begin"

Hardware:

CPU: Intel Core i5-8250U

GPU: Intel UHD Graphics 620 iGPU, Nvidia GeForce MX150

RAM: 8GB

Motherboard/Laptop model: Acer Aspire A515-59Z0

Audio Codec: Realtek High Definition Audio (SST)

Ethernet Card: Realtek PCIe GBE Family Controller

Wifi/BT Card: Qualcomm Atheros QCA9377 Wireless Network Adapter

What guide/tool followed: [Dortania](https://dortania.github.io/OpenCore-Install-Guide/)

Issue:

Hello, I was doing this because of a school project that needed Xcode. I got stuck on "pci configuration begin". I have tried adding npci=0x2000, npci=0x3000, and -wegnoegpu to the boot-args, and I still encounter this problem. I am also using OpenCore v0.6.2.

What files/config I am using: OpenCore v0.6.2 Intel Coffee Lake
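
For reference, in OpenCore those boot-args live in config.plist under NVRAM → Add → 7C436110-AB2A-4BBB-A880-FE41995C9F82 → boot-args. On a Mac you can double-check what is actually set with, e.g. (a sketch assuming the USB's EFI partition is mounted at /Volumes/EFI):

$ plutil -p /Volumes/EFI/EFI/OC/config.plist | grep -i boot-args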

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/santicazo
πŸ“…︎ Oct 07 2020
🚨︎ report
H370-i Stuck on PCI Configuration End
MOBO: ASUS ROG STRIX H370-I GAMING
CPU: I5-8400
GPU: Intel UHD 630
RAM: Crucial 1x16GB DDR4 2400MHz
Ethernet Card: Intel I219V (built into mobo)

BIOS Settings:

>VT-d: Disabled
>
>Secure Boot Mode: Disabled
>
>OS Type: Other OS
>
>XHCI Handoff: Enabled
>
>Can't find Serial Port and CFG-Lock

UPDATE: I've shifted away from Clover to OpenCore instead, and what do you know, I succeeded on the first day! Though it's more tedious, the process of going through the OpenCore guide really helps in understanding the different components, and makes troubleshooting easier, as you would know what affects what. Read through every page of the guide even if it doesn't seem to apply to you! :)

Hi guys, this is my first time trying to make an SFF hackintosh build with Catalina 10.15.

I followed the recommended vanilla guide: https://hackintosh.gitbook.io/-r-hackintosh-vanilla-desktop-guide/ and referenced https://github.com/Autocrit/Asus-ROG-STRIX-H370-I-GAMING-Hackintosh-Guide as well.

When I boot from the USB, the whole process freezes on this screen: https://imgur.com/a/xWKuiRd

>"PCI Configuration End. bridges 4, devices 13"

Someone else had a similar issue with an ASUS mobo: https://www.reddit.com/r/hackintosh/comments/eaqjm3/vanilla_install_stuck_on_pci_configuration_end/

His fix was to enable Serial Port. However for my mobo, I can't seem to find a serial port option under the advanced settings :(

I'm booting from a USB 2.0 stick through a 2.0 dongle (the mobo only has 3.0). Any help would be appreciated :( I've tried different methods across different forums for 3 days, and every time it results in failure.

My config.plist: https://pastebin.com/6N8pT4bG

Clover folder Structure: https://imgur.com/a/zwWyU0C

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/charax_
πŸ“…︎ Jun 25 2020
🚨︎ report
Confirmed List of USB 3.0 PCI-e cards/Laptops/configurations which work for Kinect v2 (during preview)

Hello, I'm reading this article:

https://social.msdn.microsoft.com/Forums/en-US/bb379e8b-4258-40d6-92e4-56dd95d7b0bb/confirmed-list-of-usb-30-pcie-cardslaptopsconfigurations-which-work-for-kinect-v2-during?forum=kinectv2sdk

to find out which PCI-e cards, USB hubs, and laptop configurations were confirmed to work with the Kinect v2 during the developer preview (beta devices).

I see that there is also this device:

https://plugable.com/products/usb3-hub7-81x/

You know what? I have it. I'm using it. And I tried to attach the Kinect v2 to it. And surprise: it does not work. As soon as I run the Kinect verifier inside the Windows 10 VM where it is attached, it quits. Do you know the reason?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/loziomario
πŸ“…︎ Aug 10 2020
🚨︎ report
What is my PCI-e lane configuration with my multiple storage devices and GPU?

I have recently added an M.2 SSD to my PC, and I have noticed a drop in performance in my games that require a lot of computing power.

I'm pretty sure that I have figured out the problem:

The NVMe drive is taking up lanes that would otherwise be used for the GPU (x16), as I currently have three storage devices (a normal SSD (120GB) for my OS, a hard drive (2TB), and my new M.2 SSD (1TB)).

So my query is: what PCI-e lane configuration would be used with these three devices and my GPU? Would it be x8 (GPU), x4 (M.2), x4 (HDD), x4 (OS SSD), or something else (maybe x4 for the chipset on the mobo)?

And what would fix this PCI-e lane bottleneck? I am currently planning to upgrade to a Ryzen 5 3600 and an MSI X570 Gaming Plus mobo, as Zen 2 has more PCI-e lanes (~24). Maybe I should wait for Zen 3?

Or maybe I should just get rid of my OS drive altogether and release the bottleneck somewhat.

I hope I haven't been espousing bullshit throughout this post :)

If you need to know, specs:

  • Ryzen 5 2600 (which has 20 PCI-e lanes)
  • MSI B350 gaming plus mobo
  • RTX 2080 ti (YES I KNOW BOTTLENECK WHATEVER shhhh)

and the three storage devices:

  • SSD
  • HDD
  • M.2 SSD

If you need any other info, please ask.

TIA,

Cat

Edit: "The Ryzen 5 2600 includes 20 PCIe lanes - 16 for a GPU and 4 for storage (NVMe or 2 ports SATA Express)."

" The Ryzen 5 3600 has x16 for a GPU and x4 for storage (NVMe or 2 ports SATA Express). "

-Quoted from Wikichip

it seems that I can either have two SATA devices or a single NVME device from these articles so upgrading wouldn't solve much?

again I could be totally wrong.
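
One way to replace the guesswork with a measurement, at least for the GPU: PCIe devices report both their maximum and currently negotiated link width. A sketch (on Linux; the 01:00.0 address is a typical GPU location, yours may differ - find it with plain lspci first):

$ sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap = what the slot/device supports, LnkSta = what was actually negotiated (e.g. x16 vs x8)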

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/DaCatNextDoor
πŸ“…︎ Jul 19 2020
🚨︎ report
libvirt.libvirtError: unsupported configuration: host doesn't support passthrough of host PCI devices

Hello to everyone.

I'm trying to configure passthrough of my NVIDIA GeForce GTX 1060 graphics card from the host (i5 CPU + GIGABYTE GA-Z87-HD3, LGA 1150/Socket H3, Intel Z87 motherboard, DDR3, running Ubuntu 20.04) to a guest OS with Windows Server 2019. This is how I have configured everything (a note on applying these changes follows the list):

  1. I set the internal graphics chipset as the primary graphics device to boot the computer.
  2. lspci -nn | grep 01:00.

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] [10de:1c02] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)

  3. /etc/modprobe.d/blacklist-nouveau.conf

blacklist nouveau
options nouveau modeset=0
blacklist nvidia

  4. /etc/modprobe.d/vfio.conf

options vfio-pci ids=10de:1c02,10de:10f1
options kvm ignore_msrs=1 report_ignored_msrs=0
options kvm-intel nested=y ept=y
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci

  5. /etc/modprobe.d/nvidia.conf

softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia* pre: vfio-pci
softdep xhci_hcd pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
softdep xhci_hcd pre: vfio-pci
softdep i2c_nvidia_gpu pre: vfio-pci

  6. GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"
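
One step that is easy to miss on Ubuntu: the modprobe.d and GRUB changes above only take effect after the corresponding images are regenerated and the machine rebooted. A sketch, assuming stock Ubuntu tooling:

$ sudo update-initramfs -u   # bakes the modprobe.d options/softdeps into the initramfs
$ sudo update-grub           # regenerates grub.cfg with the new kernel command line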

In your opinion, where could the error be? Because virt-manager says:

Error starting the domain: unsupported configuration: host doesn't support passthrough of host PCI devices

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
    self._backend.create()
  File "/usr/lib/python3/dist-packages/libvirt.py", line 1234, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: unsupported configuration: host doesn't support passthrough of host PCI devices

As soon as Ubuntu 20.04 starts, I run this script:

#!/bin/sh

DEVICE="01:00.0"

# load vfio-pci module
sudo modprobe vfio-pci

for dev in "0000:$DEVICE"; do
    vendor=$(cat /sys/bus/pci/devices/$de

... keep reading on reddit ➑
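
Truncated script aside, one quick check worth making is whether the card is actually bound to vfio-pci at the time the VM starts; a sketch, using the bus address from point 2 above:

$ lspci -nnk -s 01:00.0
# the 'Kernel driver in use:' line should read vfio-pci, not nouveau or nvidia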

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/loziomario
πŸ“…︎ Jul 02 2020
🚨︎ report
Will I have enough PCI Lanes for this configuration and use case?

I have quite a bit of hardware across multiple machines and I would like to combine the hardware into one workstation.

My concern is whether this configuration will work regarding the PCI lanes available to the CPU and chipset. I will start with a rundown of the configuration I have in mind and then look at it in relation to PCI lanes.

  • CPU: i9 9900k (+16 PCI Lanes)
  • Motherboard: ASUS ROG Maximus IX Code (+24 PCI Lanes from Chipset)
  • RAM: 64GB GSkill Trident Z
  • GPU1: ASUS Strix 2080Ti
  • GPU2: ASUS Strix 980
  • SSD1: 1TB Samsung NVME
  • SSD2: 500GB Samsung NVME
  • HDD1: 8TB IronWolf NAS Drive
  • HDD2: 8TB IronWolf NAS Drive
  • NIC: ASUS 10Gbe (Aquantia) Card
  • USB Expansion: Generic 4 Port USB 3.0 Card
  • PSU: 1000W Corsair

I have all the hardware already, I just do not want to start tearing apart my existing machines until I am sure it will function.

From the CPU and chipset I have 40 lanes available. But from what I have read, the CPU will supply either x16 to one GPU, x8+x8 to the two GPUs, or x8+x4+x4 to GPU1, GPU2 (or the other SSD?) and NVMe storage respectively, with the chipset covering "other devices", whatever that means.

If the GPUs and SSDs must get their lanes from the CPU directly rather than the chipset, then this configuration will not work (should have gone Threadripper!).

The idea with this configuration is that I can run my Linux flavor of choice as the main OS, with Windows installed to the 500GB NVMe SSD, which can be booted through a VM with the GPU, the USB expansion, and the motherboard NIC passed through (I have tested this on my spare machine and it works a treat!). The IronWolfs I am thinking of configuring with ZFS for redundant bulk storage, but I need to look into that more (and how to pass this to the Windows VM!).

I would also like the 2080 Ti at least to operate on x16 lanes, as that will be the main workhorse in Linux. The 980 can limp along on x8 if possible off the chipset for work in Windows on the TWO program suites I need it for (Autodesk and Creative Cloud!), and then x4 for each SSD, x4 for the 10GbE NIC, and x4 for the USB expansion, using a total of:

x16 for GPU1 (all 16 lanes from the CPU); x8 for GPU2, x4 SSD1, x4 SSD2, x4 NIC, x4 USB expansion (24 lanes from the chipset).
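
To see which devices actually hang off the CPU and which off the chipset, the PCI topology itself is a good starting point; a sketch (on Linux):

$ lspci -tv
# devices behind the CPU's PEG root ports get CPU lanes directly;
# everything behind the PCH's root ports shares the chipset's uplink to the CPU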

To summarise my questions:

Is this configuration possible?

Is x16 required for GPU1 (workloads incl. rendering in software like Blender and simulation software)? I can only find gaming comparisons, which are not useful as gaming does not saturate the

... keep reading on reddit ➑

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/NeonBlizzard
πŸ“…︎ Jul 08 2020
🚨︎ report
Help me please! The GPU is stuck in pcie2; there is 0mm of space between the backplate and the mobo I/O shield, so I can't push it all the way in, but I can't open the PCI-e lever either. What to do now?
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/PrimeX121
πŸ“…︎ Apr 07 2021
🚨︎ report
Issue Installing OSX onto x99 - Previously worked, Moved to Xeon 2690v3 no longer makes it through USB Boot to installer - PCI Configuration Begin.

So I previously had Mojave working on the following system:

Asus X99-Pro
5820K
RX580 - then later swapped out for a Vega FE
ADATA SX8200 NVMe

It worked great, then I had an incident trying to merge partitions, lost disk space, and decided it was time to throw in the Xeon 2690v3 I had sitting around.

So I tested my bootable USB with my 5820K: works fine. I swap the Xeon in, and it takes a big dump at the "pci configuration begin" stage.

Now the only difference I can think of is that the PciRoot address in the Devices section of Clover is wrong, or it is having some hissy fit because the AAPL,ig-platform-id association is there with no iGPU.

Can anyone shed some light on this? Has anyone else used a Xeon with no iGPU?

I'd love to figure this out.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Doucheos
πŸ“…︎ Mar 01 2020
🚨︎ report
OpenCore stuck after '[ PCI configuration end, ..'

Hi fellow hackintoshers,

Inspired by this guide, I tried switching from Clover to OpenCore, again. I have tried this several times already but never got close to making it work. This time I think I am quite close, but I don't know how to debug the issue I am facing.

My screen looks like this:

OpenCore console output

I tried checking the debug output from OpenCore, but it stops before macOS presumably takes over.

I also tried boot parameters npci=0x2000 and npci=0x3000. What else can I try or how do I diagnose what is causing this issue? Thank you so much for any advice!
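
In case it helps anyone with the same hang: a DEBUG build of OpenCore can keep its log past this point if file logging is enabled. The relevant config.plist keys, per the usual debugging advice (values are the commonly suggested ones):

Misc → Debug → AppleDebug = True
Misc → Debug → Target = 67   (onscreen plus file logging; the log file lands on the EFI partition)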

Link to OpenCore-EFI and Clover-EFI, which works (I removed HFSPlus.efi from both).

My setup is:

  • Intel i7-9700K
  • Asus ROG MAXIMUS XI HERO
  • Asus Radeon RX VEGA 64 ROG STRIX
  • OpenCore 0.5.4
  • MacOS Catalina 10.15.1
  • FileVault enabled (thus the switch to VirtualSMC)

I am using SSDT-UIAC.aml as generated by USBMap, SSDT-PLUG.aml with PR_.CPU0 changed to SB_.PR00, and SSDT-EC-USBX.aml and SSDT-AWAC.aml as in the samples.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/TRAP_GUY
πŸ“…︎ Jan 14 2020
🚨︎ report
VGA (PCI-E) configuration for THICC 3 5700 XT?
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/BlisteryEarth
πŸ“…︎ Dec 24 2019
🚨︎ report
Scoop – Jeff Bezos' Blue Origin aims to fly first passengers to space as early as April: The NS-14 mission was the 1st of 2 "stable configuration" flights, sources say, with the next targeted within six weeks and the first crewed flight six weeks after. twitter.com/thesheetztwee…
πŸ‘︎ 233
πŸ’¬︎
πŸ‘€︎ u/ragner11
πŸ“…︎ Jan 14 2021
🚨︎ report
Real space raid army...wracks configuration

In your opinion, what are our heavy options?

I was thinking 3 or smaller squads of Wracks with Hexrifles and 2 Talos...

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/lostintime102785
πŸ“…︎ Apr 12 2021
🚨︎ report
PS5 arriving in a few days. Placed PS4 for reference only. I wanted to check if cooling is enough. More than 1 inch on each side with a small opening behind. Is this configuration advisable? Asking because of limited space.
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/PSNCF
πŸ“…︎ Mar 20 2021
🚨︎ report
Stuck at [pci configuration end, bridges 6, devices 17]

I have an HP Omen 15 with an i7-8750H, a GTX 1060, and 16 GB RAM.

It gets stuck at the error message above; any help would be appreciated. Also, I'm trying to install Catalina, but Mojave has the same error.

Thanks in advance

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/zedrox464
πŸ“…︎ Aug 25 2019
🚨︎ report
Space AK go brrrtttt. My AK74 in its current MSW configuration and yes I do hate my arms. reddit.com/gallery/lftk6f
πŸ‘︎ 60
πŸ’¬︎
πŸ‘€︎ u/odst_airsoft
πŸ“…︎ Feb 09 2021
🚨︎ report
Stuck at [PCI configuration begin] when adding a second NVMe drive

Hi folks,

I've been following the Snazzy Labs video tutorial and the associated text ones to install OpenCore and Catalina 10.15.2 on my system (Ryzen 2700 on an ASUS X470 Gaming TUF with an ASUS Arez Vega 56).

I got it to boot the installer after a bit of fiddling (most notably npci=0x3000).

At this point I decided to add a second NVMe drive to use for OSX and be able to dual boot from the UEFI boot menu.

The problem now is that I'm stuck at the infamous [PCI configuration begin].
I've tried npci=0x2000 and npci=0x3000, and I've tried letting SSDTTime resolve IRQ conflicts, adding the resulting SSDT-HPET.aml to my boot drive and the patches to my config.plist...

Nothing I tried seems to have any effect, and I'm not sure where to look next.

Can anyone point me in the right direction to debug this?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/anotherjulien
πŸ“…︎ Jan 26 2020
🚨︎ report
Good resources to understand configuration/phase space

Hello, I'm an undergraduate student studying mechanics and was wondering if anybody had any good recommendations for where to learn about configuration/phase spaces in a more mathematical setting. I am familiar with differential geometry (tangent bundles, vector fields and the Lie derivative, differential forms and integration, etc.) and a little bit of Riemannian/Lorentzian geometry, and was hoping to understand configuration and phase spaces a little more mathematically. My current university course uses Landau, and although it's a great book, it doesn't have what I'm looking for on this particular topic. Any help is really appreciated!
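
For anyone searching the same topic, the core dictionary between the physics and the geometry is compact enough to state here (a sketch of standard facts; sign conventions vary by author). For a configuration space, i.e. a smooth manifold $Q$ of generalized coordinates, Lagrangian mechanics lives on the tangent bundle $TQ$ (states $(q, \dot q)$) and Hamiltonian mechanics on the cotangent bundle $T^*Q$ (states $(q, p)$), the phase space. $T^*Q$ carries the tautological one-form and canonical symplectic form

$$\theta = \sum_i p_i \,\mathrm{d}q^i, \qquad \omega = -\mathrm{d}\theta = \sum_i \mathrm{d}q^i \wedge \mathrm{d}p_i,$$

and Hamilton's equations $\dot q^i = \partial H/\partial p_i$, $\dot p_i = -\partial H/\partial q^i$ describe the flow of the vector field $X_H$ defined by $\iota_{X_H}\omega = \mathrm{d}H$.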

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Inari_best-boy
πŸ“…︎ Mar 11 2021
🚨︎ report
