Hello guys, first post here!
I have just purchased a Dell PowerEdge R730 with an 8-slot 3.5" SAS/SATA backplane.
It was sold to me on eBay with 2 x 600GB SAS drives. Then when I switched it on after I got it... hey presto... just the SATA RAID utility.
I opened the box to find there was no SAS RAID controller in there. Why the seller decided to put two SAS drives in it, I don't know.
My question is...
Dell PERC HBA330 12Gb/s SAS / 6Gb/s SATA PCIe 3.0 HBA
Controller Card P2R3R
Will this card just slot into the RAID controller slot on the motherboard and pick up the SAS drives, or do I have to change any of the cabling from the backplane?
Please help, I don't want to waste any more money... I bought 2 x 4TB 3.5" SAS drives to put in this server, grr!!!
Cheers
JohnO
Hi, I'm exploring the option of attaching a QNAP TR-004 to my OpenMediaVault server via USB.
Now, since OMV neither supports nor allows controlling RAID via USB, for reliability reasons, I'm trying to understand if and how I can handle RAID management via the hardware switches or QNAP's External RAID Manager for Windows.
The problem, though, is that I want to start with one HDD as a single disk and add more HDDs over time, so I would have to migrate from single disk to RAID 1 and eventually to RAID 5.
Can I do this kind of gradual migration/expansion via the QNAP External RAID Manager by connecting the TR-004 to a Windows PC every time I have to manage/change the RAID configuration, and still use the TR-004 on my OMV server?
Thanks for reading.
[edit:] As I learned in the meantime, it's not possible to expand and change the RAID modes one step at a time without having to wipe the disks on every change, as the TR-004 does not support RAID migration on expansion. Thus it would be impractical to start out with one disk (of e.g. 14TB) and expand over time.
Hello, I have a Cisco C250 M1. It currently has a SAS 1068E, which can only support drives up to 2TB, and I am looking to get support for drives up to 6TB. I am looking at a SAS 9211-8i and just want to make sure it will actually work in this server. I am not very confident when it comes to hardware, and I was hoping someone else here might have a bit more insight.
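For context, a quick sketch (my note, not from the post) of where that 2TB ceiling typically comes from: controllers of the 1068E's generation address disks with 32-bit LBAs, so with 512-byte sectors the arithmetic caps out at:

```shell
# Max addressable capacity = 2^32 sectors x 512 bytes per sector
echo $(( 2 ** 32 * 512 ))   # 2199023255552 bytes = 2 TiB
```

Newer HBAs like the 9211-8i use larger addressing and don't hit this wall.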
I'm used to using older/surplus server-grade RAID adapters like the LSI MegaRAID 9362-8i, but with recent Windows 10 (and presumably Windows 11) builds, the drivers are unstable/unsupported and cause BSODs.
What's the best option (preferably cheap/surplus but still cachey and performancey) for Windows 11/Server 2022 compatibility? Or do we have to go for retail adapters now?
I have never done a raid in d1 or d2 because as soon as I have to talk with another person my social anxiety causes me to shit my pants and profusely vomit. Last week, Trey took my controller because I walked in on him and my wife. Is there any way to do a raid without contributing in the slightest?
I'm looking at the Promise VTrak J5800sD, specifically for backup storage. I don't know how many connectors this unit has, or of what type. Their site is not as helpful as I'd expect for techie people, and I realize they are friends with Apple, so simplicity might have won out there. I did work for a SD that had a lot of the smaller Pegasus units and had great support with them (Promise tech support was great), but I'm curious how they work in a more enterprise setting. Their model is only 10% more expensive than the DAS HP D3610 I was quoted, and it has twice the storage/bays...
If I had an HPE P841 RAID card and connected it with some cables, would the Promise box work like an HP DAS unit, seeing individual drives and letting me create RAID arrays, hot spares, etc.? Or is their tech more proprietary and only usable with something like Unraid or IT-mode HBAs?
I have an SFF, 8-drive Dell R710 with a PERC H700 controller. I used six of the drives in a RAID-6 array and the last two drives in a RAID-1 mirror.
The 6-drive RAID-6 array has Debian 11 (bullseye) running on it, and whenever I restart my server it automatically boots into this. No problems here.
The question is: how do I install Nextcloud on the 2-drive RAID-1 mirror and have it start and run with every server reboot as well?
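One rough way to do this on Debian, sketched below. The device name `/dev/sdb`, the mount point, and the use of Docker are all my assumptions, not from the post; the RAID-1 virtual disk may appear under a different name on your system.

```shell
# Format the RAID-1 virtual disk (assumption: it shows up as /dev/sdb)
sudo mkfs.ext4 /dev/sdb

# Mount it persistently via /etc/fstab so it comes back on every reboot
sudo mkdir -p /mnt/raid1
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) /mnt/raid1 ext4 defaults 0 2" \
  | sudo tee -a /etc/fstab
sudo mount -a

# Run Nextcloud with its data on the mirror (assumes Docker is installed);
# --restart=always makes the container start again after every server reboot
sudo docker run -d --name nextcloud --restart=always -p 8080:80 \
  -v /mnt/raid1/nextcloud:/var/www/html nextcloud
```

A bare-metal Apache/PHP install works too; the key pieces are the fstab entry and a service that is enabled at boot.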
Hello. First of all, I have a mini PC (HP ProDesk 400 G4) as my homelab server. I also want to have a NAS at home so I can store all my data there. I decided not to buy a typical NAS casing and electronics like a Synology, but to use a simple metal shelf where the drives will sit. The issue is, I realised I have no idea how I will power those drives. The data side is solved via a PCIe RAID controller card which I'll connect to that mini PC, and from there I can manage those drives. But those drives also have SATA power connectors, and I don't know how to supply that.
Any ideas?
Hi guys, I just got into building my new NAS and discovered a problem: Proxmox is not able to directly pass through disks into a VM. So my only option in that case is to get a PCIe SATA or SAS controller. I tried to find something cheap but not bad, and I found that in close proximity to where I live somebody is selling an HP Smart Array P410 controller (6G SAS / 3G SATA, 512MB) for really cheap. Does anybody have experience with this controller's compatibility with TrueNAS Core and Proxmox? I just want to pass it through to the VM directly and create a software RAID. I want software RAID because I've heard from a few people that if a hardware RAID controller fails, it's really troublesome, or in some cases almost impossible, to get the data back.
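For reference, passing a whole controller into a VM looks roughly like this on the Proxmox host. The PCI address and VM ID 100 are made-up examples, and IOMMU (VT-d/AMD-Vi) must already be enabled in the BIOS and kernel:

```shell
# Find the storage controller's PCI address on the host
lspci -nn | grep -i -e sas -e raid

# Pass the whole controller through to VM 100 (example address shown)
qm set 100 --hostpci0 0000:05:00.0
```

With the controller passed through, the guest (e.g. TrueNAS) sees the raw disks and can build software RAID on them.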
Hi,
I know that with a Dell server, if I need to replace the bare metal and I replace the chassis with the same model (e.g. an R720XD with the same CPU, RAM, etc.) and move over the disks, the system will boot up with no issues, as if it was in the old box. The only changes I would need to make would be for the network MAC addresses. Does it work the same way with HPs? If, say, we have a BL460c Gen8, could we take the disks in those boxes and put them in a different BL460c Gen8? Could we do it with same-generation CPUs but a lower core count?
If yes, could we "upgrade"? Meaning, could we take disks that were in BL460c Gen8 boxes and put them in a BL460c Gen9?
Hey, I managed to get a new Supermicro 847BE1C-R1K28LPB for dirt cheap and want to know what RAID controller I should use that can support 36 HDDs in RAID 6/60. This will be for Windows Server. Preferably a cheap LSI or rebranded one (for example the IBM M5015, but that only supports up to 32 HDDs), as you can sometimes find those cheap with the battery, but any advice is welcome. Thanks!
Selling my homelab server, a Cisco C220 M4S. I used it as my VMware nested lab environment. Everything is green and healthy. I'm moving into a different career field within IT, so there's not much use for it anymore. It has the following specs:
I had consumer-grade SSDs in it (see picture): 4x 1TB Teamgroup AX2 and 6x 256GB Samsung 8560 Pro (again, additional cost if requested).
Located near Reading, PA willing to travel 20 miles, willing to ship but prefer in-person.
Asking $1000
Hi all. I've hit a bit of a wall on my test server and I'm not sure where to look next. I've got a Dell PowerEdge T320: Windows Server 2022, 32GB of RAM, a single 6-core Xeon, two SSDs in RAID 1 for the OS (PERC H310), and four 2TB spinning drives for data. It may be important to note that while the two OS drives are in RAID 1, the four data drives are in a two-way mirror via Storage Spaces and began as ReFS. It has 2 onboard gigabit NICs, and I added a 2x 10GB SFP card as well.
I have some non-critical VMs I wanted to shuffle to it and check things out a bit. The VMs were previously hosted by "old-host", which is a simple i5 desktop rig running Server 2019. All Hyper-V in this situation by the way. I shut the VMs down, exported locally within the i5-old-host-rig, and attempted to copy from old-host to T320-host over the wire via file share. Thing is, about 50-55% through the 46GB transfer, consistently, it hangs up. I don't get an error but the transfer status just seizes up. If I wait long enough I've even seen the entire copy process seemingly restart from the beginning only to hit the same wall again. If I try to cancel it sits there attempting to cancel endlessly. The rest of the OS is fine and snappy during all this.
Things I've tried:
So I can replicate this on multiple network cards and multiple file systems, and it's isolated to this T320-host, which doesn't seem to be under really
Hello.
So, yesterday, I got an alert on the little iDRAC LCD in the front of my server, saying the battery had less than 24 hours left. But... this server is out of support by Dell, so I don't think I can get a replacement from them. Doing a quick Amazon search led to many possible options, many of which were different shapes... so, which one do I need to get?
P.S. I'm aware that we are not supposed to sell in r/homelab. I'm not asking to buy/sell from anyone here. I'm just asking what the proper battery for my RAID controller is.
Thanks!
Hi there,
First of all my current Setup:
HPE ProLiant ML350 Gen9
2x Intel Xeon E5-2680 V4
64GB DDR4 ECC RAM
HPE B140i RAID controller
2x 256GB SSDs
2x 2TB HDDs
RTX 2080 Super
Hypervisor: VMware ESXI 7.0 U2
Running 2 VMs:
1. Gaming server (used as a temporary gaming machine with the RTX; works well for its configuration)
2. Main server (hosts casual game servers and a web server)
So I'm trying to get my HDD speeds to a "normal" level with my built-in RAID controller. As I set everything up in ESXi (I used this guide: https://it-infrastructure.solutions/how-to-manage-hp-smart-array-raid-controllers-from-vmware-esxi/ ), I came across this:
https://support.hpe.com/hpesc/public/docDisplay?docId=a00106849de_de&docLocale=de_DE (I can read it, yes, but have trouble understanding it clearly)
Does it mean that I could install RAID controller drivers on ESXi 7.0 with this machine or not? The ML350 Gen9 isn't listed under "affected combinations". Or is it just that, in general, the HPE B140i RAID controller can't work with ESXi 7.0 (has no drivers that do)?
(And if there are driver packages for the B140i that work with ESXi 7.0, where can I find them?)
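One quick check from the ESXi side, as a sketch (my suggestion, not from the post): see whether a driver for the controller is installed and whether ESXi has bound a driver to the adapter at all. The commands are standard esxcli; `hpdsa` is the HPE Dynamic Smart Array driver name the B140i family typically uses, which is my assumption here:

```shell
# From an SSH session on the ESXi host:
# List installed VIBs and look for an HPE Smart Array / Dynamic Smart Array driver
esxcli software vib list | grep -i -e hpdsa -e hpsa

# Show which storage adapters ESXi actually sees and the driver bound to each
esxcli storage core adapter list
```

If no driver VIB is present and the adapter only appears in AHCI/SATA mode, that would match the B140i not being supported on this ESXi release.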
I mean, I tried accessing the firmware section on the HPE website for the B140i controller.
Overview: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04406959#N10238
But as soon as I try to open the firmware section (http://h20565.www2.hpe.com/portal/site/hpsc/public/psi/home/?sp4ts.oid=5293146) I get a "Website not accessible" error.
Also, the RAID controllers suggested by HPE in the first link, in the "For ProLiant Gen9 Server" section, aren't too pricey (example: you can get the HPE H240ar, the one without cache memory, for €60), but on its firmware page I only saw firmware for ESXi 6.7, not 7.0. (Is this a problem later?)
Firmware section: https://support.hpe.com/connect/s/product?kmpmoid=7553523&tab=driversAndSoftware&environmentType=2200029&environmentSubtype=2000343&driversAndSoftwareSubtype=9000214
I want to improve ZFS performance. I have 6x 1TB WD Red CMR drives on a standard SATA controller off the motherboard. Naturally, this offers no caching and no BBU.
I'm looking to purchase a controller with caching and a battery backup unit (BBU) for my ZFS drives. Obviously I won't be using the RAID functions of the controller, but I do want the caching and BBU functions. Does anyone know if the caching and BBU on a controller will still function even if RAID is disabled? Or should I buy from a certain brand that offers caching and BBU without the RAID functions?
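For what it's worth (my note, not the poster's): ZFS has its own caching layers that play the role a controller cache/BBU would, using plain SSDs behind a dumb HBA. A sketch, where the pool name `tank` and the device names are assumptions:

```shell
# Add a mirrored SLOG (separate intent log) to absorb synchronous writes;
# this is the ZFS analogue of a battery-backed write cache
zpool add tank log mirror /dev/sdg /dev/sdh

# Add an L2ARC device as a second-level read cache
zpool add tank cache /dev/sdi
```

This is why ZFS guidance usually favors a simple HBA over a caching RAID controller: the controller's cache sits between ZFS and the disks and can undermine ZFS's own write-ordering guarantees.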
I'm running Windows Server 2019 on my home-built server, and having managed to get a good deal on some external WD drives which I'm shucking, I'm looking at RAID controllers and struggling to find any consistent reviews for them. I've checked the usual suspects (Broadcom/LSI Logic, Adaptec, etc.), but for every "this is the best RAID controller since sliced bread" review, there's a "this melted my face off Indiana Jones-style" horror story. So, has anyone got recommendations for an 8-port SAS/SATA controller, preferably under USD$250 for the card, that's as painless as possible to set up and maintain? This will be going into the spouse-friendly server, so I don't want anything I'm going to have to be constantly fiddling with :P
Looking for some expert(ish) advice on improving my RAID 5 array. Currently it runs slow af on fakeraid from the mobo. I'm looking to add a RAID controller, specifically the Dell PERC H700 (PCIe x8, 1GB cache). I have pretty much no experience with dedicated RAID controllers, so I just want to make sure I'm buying stuff that's going to work.
Is this a decent controller choice for running internal RAID on a standalone box?
Secondly, I'm having a lot of trouble verifying the exact breakout cable I'll need to hook up my SATA hard drives; it's not even specified in the manual for that controller. Is it just an SFF-8087 SAS-to-SATA breakout cable I need? The RAID 5 is a 3-member array, so I presume I just need a single 1-to-4 breakout cable?
I recently was gifted a Dell Poweredge T410, and am having some troubles getting it set up. I'd like to preface this by saying that I know very little when it comes to building a server.
I was given three hard drives for this: a 14TB and two 8TB drives. All of these work, but I cannot get them to work correctly. The following information may be incorrect since I am new; it just comes from what I've managed to gather while setting this up. Please be patient with me; I'm happy to learn and to answer questions about what I've done.
When originally setting this up, I installed Server 2019 (which I know isn't officially supported, but it does at least boot) using BIOS instead of UEFI. I did not realize there was much of a difference, but when I got into the OS, I realized my drives weren't being read. When booting, I press Ctrl+R to enter... the RAID controller utility for BIOS? I think? In this window, I am able to select a RAID level (0, 1, etc.) and then select my drives to create virtual disks. I opted for 0, then selected each drive separately, creating 4 virtual disks (those three drives plus my SSD with the OS on it). After doing this, they all show up in the OS, but they only show a total of 2TB available. They also show a max of 2TB in the RAID controller when I set them up. Everything is partitioned as GPT, and I confirmed as much.
I learned (perhaps this is wrong) that in order to fix this, I need to set them up with the UEFI raid controller instead of the BIOS one. I learned that in order to reach this, I simply need to enter the System Setup (F2 on boot). I reinstalled the OS using UEFI (which boots perfectly). However, while pressing F2 on boot does say it's going to enter the system setup, it doesn't. It just skips over it entirely. After the initial POST information, I get a screen saying that UEFI is starting up, but then it just immediately boots into the OS instead.
Attempting to set up my drives with the Ctrl+R menu I found earlier does the same thing, as it's the BIOS version instead of the UEFI one. I believe that once I can enter the UEFI raid controller, I should be able to figure it out from there, but entering the UEFI raid controller seems impossible for me right now.
Is there anything I'm doing wrong based on this information? Am I looking in the wrong location? I've been told by friends that all I need to do is enter the raid controller to set my drives up, but entering the raid controller is my problem,
Just purchased a new server running Windows Server 2022 to be used as a file server and Domain controller in a small office environment. We only have about 10 users active at one time and don't put a lot of stress on the file server.
I didn't realize this server only has an HBA card and not a RAID controller. I usually run RAID 1 with a hot spare. Should I just purchase a RAID controller card or do I have other options?
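One software-only option worth noting (my sketch, not from the post): with just an HBA, Windows Server can provide a two-way mirror plus a hot spare via Storage Spaces. The pool and disk names below are example values:

```powershell
# Create a pool from all poolable disks behind the HBA
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "FilePool" `
  -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
  -PhysicalDisks $disks

# Create a mirrored virtual disk across the pool
New-VirtualDisk -StoragePoolFriendlyName "FilePool" -FriendlyName "Data" `
  -ResiliencySettingName Mirror -UseMaximumSize

# Designate one pool disk as a hot spare (replace the name with yours)
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage HotSpare
```

For ~10 light users, this avoids buying a RAID card at all; a dedicated controller mainly buys you cache and boot-time management.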