Astronomers have captured the first clear image of a moon-forming disk around a distant exoplanet. The image was taken by the Atacama Large Millimeter/submillimeter Array (ALMA).
πŸ‘︎ 2k
πŸ’¬︎
πŸ‘€︎ u/MistWeaver80
πŸ“…︎ Nov 21 2021
🚨︎ report
Bad disk in a RAID 10 array has made the volume completely inaccessible. Please help.

Hi peeps. I hope someone here can help me.

I have an HP MicroServer Gen 11 running Windows Server 2019. The OS is on an SSD, and I set up 4x 4TB hard drives as a RAID 10 array, giving me 8TB mirrored.

A couple of days ago, one of the drives became noisy and has since failed. In theory, with RAID 10 I should just be able to swap out the faulty disk and let the array rebuild, but since that disk failed, the whole RAID array cannot be accessed.

Also, hard drive diagnostic tools aren't helping me identify the failed drive: they all see the array as one big 8TB disk, rather than showing the 4 individual drives so that I can see which one is faulty.

I've tried using Hetman RAID Recovery and Runtime RAID recovery for Windows. Neither is able to even detect the RAID configuration.

I chose the RAID 10 route purely to keep my data safe, on the understanding that a failure could be fixed by replacing the faulty drive.

Any help would be greatly appreciated. Thanks in advance.

πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Jan 18 2022
🚨︎ report
What happens to an ACTIVE BTRFS RAID1 array when a single disk fails?

I'm having trouble figuring out what actions the filesystem will take (if any) when a disk fails in an ACTIVE RAID1 array. I've read that an array missing a disk will refuse to mount unless it's mounted in "degraded" mode (a btrfs mount option), and that all makes sense, but what about an ACTIVE, in-use array? When a disk fails, will the array automatically remount itself in "degraded" mode, or will it just up and stop working right then and there? One of the points of RAID is high uptime: when a disk fails (with proper config), your RAID array can keep chugging and serving users while a replacement disk is on the way in the mail.

For context, I'm familiar with Linux mdadm, as it's the solution I use now: 5x 3TB disks in RAID6 with LVM on top, blah blah blah. Recently one of the disks failed, and the array just keeps on chugging, giving me time to order a replacement (already on the way). Everything about my current solution is great except for one glaring issue: "traditional" RAID via mdadm needs disks of the same size, so if I use a larger disk, the extra space beyond the existing disks is wasted. Hence my interest in btrfs and the way its RAID1(C3/4) uses all the space of differing-size disks. For btrfs to be a serious contender (at least for me), I need to know that it will behave at least similarly when the next failure occurs. Surprisingly, this has been difficult to answer.
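For reference, the manual recovery flow on btrfs looks roughly like this (a sketch; the device names, devid, and mount point are hypothetical):

mount -o degraded /dev/sdb /mnt/data       # mount the surviving member read-write
btrfs replace start 2 /dev/sdd /mnt/data   # rebuild failed devid 2 onto the new disk
btrfs replace status /mnt/data             # watch the rebuild progress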

Bonus: What are reddit's thoughts on RAID1 vs RAID1C3, and do you have any general recommendations for someone coming from the mdadm world?

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/ShoddyBrain
πŸ“…︎ Dec 29 2021
🚨︎ report
HP Smart Array P822 RAID card with EMC disk shelf and 15 drives - rebooted the server and now HP says "Unsupported configuration"?

Anyone know how to get around this without losing all data? I rebooted and my single array didn't show up. The HP Smart Storage Administrator software has a critical status message:

720 Smart Array P822 in Slot 1 has an unsupported configuration. You may reconfigure the controller, but the existing configuration and data will be overwritten and potentially lost.

I purchased the card and shelf a few months back and it's been rock solid. I was playing around with the cables behind the disk shelf yesterday, but all was well before I rebooted the server just a bit ago. When I first went into HP SSA it only saw 13 of my 15 HDs. I did a rescan and it found them all. I tried unplugging/replugging the data cable at both the enclosure and the server. Then I tried power cycling the disk shelf enclosure. Next I rebooted the server. Rescanning with HP SSA properly identifies all the drives, but I guess it cannot figure out how the array was configured previously. What steps could I take? Has anyone walked down this road before?

I flashed the card to the latest available BIOS when I got it a few months back, but it's an older card and 2018 was the latest BIOS (8.32C). The self-diagnostics on the board seem to be OK, other than the critical error message about the unsupported configuration.

UPDATE:

HP Support rudely informed me that the RAID configuration is stored in the RIS area of the HPE hard drives, not on the controller, and that if I'm using non-HPE drives, they don't even know how that works. They went above and beyond insulting me and finding ways to be unhelpful when I just had a few basic questions about an out-of-warranty product. I'm never buying HP again, and our whole shop is HP right now. Two hours of my life for a chat session to learn that HP SSA does not have a way to recreate the array without initializing/erasing the data, like most professional RAID cards have. The "engineer" did tell me to try disconnecting the cache, as their web guides (which I had already found before the chat) say to replace the cache module and contact support for further troubleshooting if there is an issue. But there was no further troubleshooting... thanks for nothing SANTOSH. :)

Also, if you disconnect the cache, at least on the P822, the card says it will not work until the cache module is reinserted. So why did support tell me to try the card without the cache module??

Support also said I could use any other HP RAID card to detect

... keep reading on reddit ➡

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/gleep52
πŸ“…︎ Jan 15 2022
🚨︎ report
Question about array disks.

Hi,

So I'm new to unRAID, I have two 12 TB disks, and now I'm confused.

Drive 1 is data and drive 2 is parity.

So now I want to expand my disk array (with the same 12 TB disk model). Should I go with 4 disks so I can do the following:

2 drives for data

2 drives for parity

Or do I only need 1 more for data, and is 1 parity drive enough?

Kind Regards,

Sovjet

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/-o0Sovjet0o-
πŸ“…︎ Dec 01 2021
🚨︎ report
RAID5 vs RAID6 for an 8-disk array

Hi All!

I currently have a 4x8TB RAID5 array, and just bought another 4x8TB after filling that up - how could I not when WD Reds are on sale?

So now I need to make a choice - do I want to expand my current RAID array to a 48TB RAID6 or a 56TB RAID5? I'm leaning towards the latter because I only have the 8 ports on my RAID card, so I can't expand it any further, and I've got a backup of all the data on external hard drives. Though I'm not sure if that's the right choice.
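For reference, the capacity math with $n$ disks of size $s$ (here $n = 8$, $s = 8$ TB) works out as:

\[
C_{\text{RAID5}} = (n-1)\,s = 7 \times 8~\text{TB} = 56~\text{TB}, \qquad
C_{\text{RAID6}} = (n-2)\,s = 6 \times 8~\text{TB} = 48~\text{TB}
\]

so the trade is one drive's worth of capacity (8 TB) for surviving a second simultaneous disk failure.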

What would you do? (Aside from getting a RAID card with more ports - I've learned my lesson for next time.)

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/ThatFireGuy0
πŸ“…︎ Nov 25 2021
🚨︎ report
Same Number of Read Errors on All Disks in Array

Hi all, new to unRAID here; I've been using FreeNAS/TrueNAS for the last handful of years. About a month ago I built a new server with an H310 in IT mode and 8x12TB shucked WD Elements drives. I had tested them with the 'thorough test' in the WD Lifeguard tools prior to shucking, and ran the preclear plugin/test before putting them into use during the initial build.

A couple of days ago I noticed my Plex stopped responding. I checked the server and saw a message that all 8 drives showed read errors [all the exact same number of errors]. I rebooted the server, the errors cleared, and everything was fine. This evening something similar happened, and I got a screenshot:

https://imgur.com/a/3xbaUjb

Any ideas on why this would be happening?

Thanks for any advice you may have!
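(Identical error counts on every disk usually point at a shared component (HBA, cabling, power) rather than the disks themselves. A quick way to confirm the disks are clean, as a sketch; the device name is hypothetical, repeated per disk:)

smartctl -a /dev/sdb | grep -iE 'reallocated|pending|crc'   # key SMART health attributes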

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Janus67
πŸ“…︎ Dec 28 2021
🚨︎ report
Recover array after failed OS Disk

My OMV OS disk failed and I've lost all my configs. How can I recover the array that I set up with SnapRAID + MergerFS?
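(An OS-disk failure doesn't touch the data or parity disks themselves, so recovery is mostly recreating the configs. A sketch, with hypothetical disk labels and paths:)

# /etc/snapraid.conf, recreated by hand to point at the same disks
parity /srv/dev-disk-by-label-parity/snapraid.parity
content /srv/dev-disk-by-label-disk1/snapraid.content
data d1 /srv/dev-disk-by-label-disk1
data d2 /srv/dev-disk-by-label-disk2

# /etc/fstab line to rebuild the MergerFS pool over the same disks
/srv/dev-disk-by-label-disk* /srv/pool fuse.mergerfs defaults,allow_other 0 0

snapraid status   # sanity-check the rebuilt config against the existing content/parity files
snapraid diff     # list what (if anything) changed since the last sync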

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/badi95
πŸ“…︎ Dec 29 2021
🚨︎ report
Growing RAID array - Just replace the disks with bigger ones?

I have a 3-disk RAID 5 array consisting of 3 x 4 TB drives. If I deactivate, replace, and rebuild each of those in turn with 8 TB drives, am I right in understanding that I can then use the extra space? Something I found in Synology's documentation seemed to imply that, but it wasn't 100% clear.
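(Synology builds its arrays on top of mdadm, so under the hood the final expansion step after the last rebuild is roughly the following sketch; the md device and filesystem are hypothetical, and DSM normally drives all of this from the GUI:)

mdadm --grow /dev/md2 --size=max   # extend the array onto the new disks' full capacity
resize2fs /dev/md2                 # then grow the filesystem on top (ext4 shown)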

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/JuanTutrego
πŸ“…︎ Dec 05 2021
🚨︎ report
Moving to Unraid - A few questions on arrays/disks/expanding storage

I think I have made the decision to move to unraid for my new build.

At the moment I have 6 x 8TB disks in my server; I also have a handful of smaller disks (4 x 2TB, 4 x 4TB).

I want to migrate from my existing setup to something running Unraid, but ideally I want to get the data off the bigger drives before moving them into the Unraid system.

If I were to set up the new system with 6 x 4TB and 2 x 2TB, this "should" give me the capacity to move everything over. Would I then be able to replace some of the disks with the 8TB disks from the old setup and slowly increase my available storage?

Second question: is it (a) possible and (b) advisable to migrate VMs from ESXi 7 to a system running Unraid? Or would it make more sense to start from scratch and then move the data over from one running VM to another?
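(On the second question: it is possible, and the usual path is converting each VMDK for Unraid's KVM. A sketch with hypothetical file names:)

qemu-img convert -p -f vmdk -O qcow2 ubuntu-server.vmdk ubuntu-server.qcow2   # convert the ESXi disk image
# then attach the qcow2 to a new Unraid VM and adjust drivers (virtio) as needed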

TIA

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/zharrt
πŸ“…︎ Dec 13 2021
🚨︎ report
Moving data from unassigned disk to a user share in array

Hi mates,

As the title suggests, I need help moving data from an unassigned disk to a user share in the array. The unassigned disk shows signs of failure, so I decided to shrink the array (following the unRAID wiki), and now the unassigned disk has some data on it that I'd like to move back to a user share in the array. Is there any way to do this? Thank you very much.
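(If the disk still reads, a straight copy from the unassigned mount into the share works; a sketch with hypothetical paths, noting that unassigned devices typically mount under /mnt/disks/:)

rsync -avP /mnt/disks/old-disk/ /mnt/user/MyShare/   # copy with progress; safe to re-run if the disk drops out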

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/tnkhai
πŸ“…︎ Nov 28 2021
🚨︎ report
Anyone Feeling Lucky: Lot of 10x 12 Bay SAS to 8Gb FC 3.5in Disk Array JBODs w/ Caddies ebay.com/itm/353511017538…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/lagerea
πŸ“…︎ Dec 01 2021
🚨︎ report
Replacing both parity disks and using them in the array.

Hi all. I've read a handful of parity swap threads and am still worried I'll screw something up so I appreciate your taking the time to answer. :)

I have 2x8TB parity disks and would like to replace those with 2x14TB disks. I would then like those 2x8TB to replace 2x5TB disks in the array. Based on a previous post it seems like I will want to replace the parity disks one at a time and then replace each data disk one at a time? Do I need to do a parity check between each step? That would be a lot of time and read/writes.

Would I be able to just do the parity swap procedure twice? The downside of this is that the array would not be available during each procedure, right?

πŸ‘︎ 21
πŸ’¬︎
πŸ‘€︎ u/smakkz
πŸ“…︎ Oct 24 2021
🚨︎ report
The Good, the Bad and the Parity (a disk failed on HP MicroServer Gen10+, and I'm rebuilding the array on the new Good one. Also, labels are an awesome thing to have)
πŸ‘︎ 69
πŸ’¬︎
πŸ‘€︎ u/human-exe
πŸ“…︎ Sep 30 2021
🚨︎ report
Accidentally pulled disk in running array

Yes, I did it. Yes, I do have the drive location plugin, but obviously it was inaccurate. I have dual drive parity, and now one of my disks as well as one of my parity drives has the dreaded red X. I'm trying to get my array shut down so I can fix this; I would appreciate any help in how to do that.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Pash91
πŸ“…︎ Nov 03 2021
🚨︎ report
NetApp DS4246 24-bay disk array for $200? Good deal or not?

Including:

  • 2x IOM6 controllers (111-00190)
  • 2x 580W PSUs (114-00087)
  • 24x caddies (including screws)

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/davidsebi
πŸ“…︎ Oct 09 2021
🚨︎ report
Disk array suggestion

Hi folks, as per the title, I would like an opinion on how I should expand my array. At the moment I have 6x 10TB HDDs; should I expand the array by adding another 10TB, or go with an 18TB?

My concerns:

  • If I lose an 18TB disk, it will be a lot of data.
  • It will take a lot of time for parity builds/rebuilds/checks (estimated at around 35-40h with my current gear).
  • It will take some effort, as I'd need to swap the 18TB disk with the 10TB parity disk and do a parity rebuild.

Thanks to whoever replies. If you could, leave a comment explaining the why of option 1 or 2, and whether I may have missed anything.


πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Oct 17 2021
🚨︎ report
DIY ZoomZoom Disk Array

Chmuranet runs what we call a ZoomZoom array on its 10G boxes. This lets us achieve disk I/O that keeps up with the 10G network: 1.2GB/s, to be exact. Here is how to do the same at home.

--

OK, first thing: get yourself a RAID card. Go to eBay; there you'll find many options.

You wouldn't game without a graphics card, so don't your disks deserve just as much love? HW RAID is just that: it offloads cycles from the CPU and lets you cache writes while you go off and do other things. You want one that has a cache and does its work on the card (vendors like Promise and Rocket do the work on the CPU, in the driver). I recommend LSI or Areca (Areca has a better management interface, and the higher-end cards have a bigger cache). With LSI you want an IR-mode card, not IT mode (common for ZFS); 6Gb/s is fine.

Areca: https://www.ebay.com/itm/185094566246?hash=item2b187ef166:g:Pe8AAOSworBhX7cn LSI: https://www.ebay.com/itm/184877414842?epid=6013411910&hash=item2b0b8d79ba:g:biIAAOSwWv9hV5mR

The LSI cards can be had for cheap, like 100 USD or less. You want PCIe, at least 6Gb/s, and support for RAID-50/60. Supermicro has a nice AOC card that is really LSI, as does Dell. We also use HP cards (P410, etc.).

Second: as many disks as possible, preferably at least 6 drives. This lets the card break a large write into small pieces and write them across multiple disks at the same time, concurrently (think LFTP for write buffers). Parallel will always be faster. We do two RAID-5 arrays put together as RAID-0 (RAID-50): complex for the card, but bleeding fast.

Third: Benjamin, I have one word for you, "WriteBack". The writeback setting tells the card that once a write is in cache, it is complete. This means you don't have to wait for the data to be written to the disk, making it a memory-to-memory transfer; lickety-split, even. If you have dodgy power at home, you might want a BBU, a battery that allows the card to retain its cache.

Fourth: you probably want to use EXT4, maybe XFS. EXT4 handles a mass of small files better than XFS (see the filesystem benchmark link). It depends on your I/O profile; for example, if you are running Plex on the box, EXT4 is essential, since Plex has a huge number of very small files.

Fifth: use bcache. Set up an SSD/NVRAM drive as a cache that fronts your disk "backing store". Two important settings, ag

... keep reading on reddit ➡
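(Where the post cuts off, a minimal bcache sketch, assuming hypothetical device names; the knobs shown are common ones, not necessarily the two settings the author had in mind:)

make-bcache -B /dev/sdb                                 # backing device: the RAID volume
make-bcache -C /dev/nvme0n1                             # cache device: the SSD/NVMe
echo <cset-uuid> > /sys/block/bcache0/bcache/attach     # attach the cache set to the backing device
echo writeback > /sys/block/bcache0/bcache/cache_mode   # default is writethrough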

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/wBuddha
πŸ“…︎ Oct 11 2021
🚨︎ report
Use RAID 0 Array of 3 disks for gaming?

Hi,

I just put three 500GB hard disks in my PC and configured them in RAID 0. CrystalDiskMark screenshot here; I'm curious if I should use it for games. 250MB/s should be fine, right?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/LeWu_DYSEwnta
πŸ“…︎ Oct 18 2021
🚨︎ report
I formatted an array disk by mistake. Am I f*****???

Having a rough morning. Sorry if I leave any details out. My anxiety is through the roof.

Yesterday I started having issues with unRAID: I got a message from my Mac saying one of my shares was full even though I had more than 4TB available. I checked unRAID and saw an error that parity was disabled due to errors. I diagnosed it as a bad drive (brand new 5 months ago), disconnected it, and packed it for warranty replacement.

This morning, I totally forgot I had packed it (sleep deprivation) and went to format the disk in unRAID. Well, I have a second disk of the same model, and it is part of the array, which is only two disks. I formatted a disk that was part of the array. I rebooted, and now it shows the disk as disabled; I don't want to make another move unless I can get the files I desperately need.

I kept only shares for movies and TV on the one that I formatted, so I am not too worried about that. I am worried about my personal photos and files on the other disk, the one that was not formatted; that disk held shares only for things like this.

How should I go about, if at all possible, getting those files backed up? Can I just use the new config tool and add that one disk to the array and it will have the files? Or is there a better/safer way of doing this?

I am very desperate to get these files back as they are files/photos over the last decade+ of my daughter being born, family, financial docs, etc...

Thank you for any help.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/cr0wstuf
πŸ“…︎ Sep 25 2021
🚨︎ report
HPE Gen9 Disk Array failed, replaced the drive and now it's rebuilding but taking too long.

One of our ESXi hosts (HPE ProLiant Gen9) had an SSD failure (RAID1) today. I shut it down, replaced the drive with a brand new, similar one, rebooted, and went into Intelligent Provisioning to check whether everything was OK. It was: the drive was picked up and the array is being rebuilt, but it's taking a very long time. It's been about an hour and it's only at 2.8%.

The array consisted of two 1TB SSDs in RAID1 but only about 300GB was being used.

Should I just wait it out? It's been a while since I worked with non-hot-plug servers, so I'm afraid to reboot the thing while it's rebuilding.
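Extrapolating from the numbers given, and assuming the rebuild rate stays constant:

\[
t \approx \frac{1~\text{h}}{0.028} \approx 36~\text{h}
\]

Hardware RAID1 rebuilds typically copy the whole member disk block by block, so the fact that only ~300GB is in use doesn't shorten the rebuild of a 1TB mirror.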

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Advanced_Path
πŸ“…︎ Aug 23 2021
🚨︎ report
Execute base64 encoded byte array from memory without writing to disk as a disguised process twitter.com/m3g9tr0n/stat…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/m8urn
πŸ“…︎ Oct 22 2021
🚨︎ report
Lost RAID 0 array twice to "The disk structure is corrupted and unreadable" - how can I prevent this?

I recently acquired an OWC Thunderbay 6 48TB model.

I have backups of everything that I'm loading onto it, the goal for the array is just to have everything in one enclosure without needing to have multiple external drives plugged in to work on big projects.

I had almost finished loading it the first time, some 35TB or so, when my computer rebooted.

No big deal, I thought, but after my PC came back up and I tried to access the RAID, I was given this error:
"Disk structure is corrupted and unreadable." I tried for several hours to recover the partition layout; all the drives were still testing fine physically, I had simply lost the array.

This model is software RAID, not hardware RAID.

I configured it using OWC's SoftRAID program.

The only option I had after all the attempts to repair the array was to format, and start over.

I was just about 10TB in this time when it happened again. Obviously I need to sort out why my PC is rebooting, but technically this could happen on any PC, and then I would have to reload the entire thing once again.

Is there a way to set up the RAID so that I won't lose the array configuration even if the system reboots?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/wannabeating
πŸ“…︎ Aug 07 2021
🚨︎ report
Need help setting up the disk array configuration

Background

I'm installing Proxmox Virtual Environment on a Dell PowerEdge T630 with a Dell PowerEdge RAID Controller (PERC) H330 hardware RAID adapter and eight 3TB 7.2k 3.5" SAS HDDs. I have replaced the optical drive with a SATA SSD, which holds the Proxmox installation and acts as the boot drive. I was contemplating using the PERC H330 to configure two of the physical disks as a RAID1 virtual disk to store my ISOs/VMs and backups, then configuring the remaining 6 HDDs as RAID10 to be used as storage for everything I do on the VMs. That storage array will be used by 3-4 VMs; the main VM (ubuntu-server) will run most/all of my Docker containers, while the rest are just for pen-testing. A few things I am planning to run in the containers:

  • Frontends: Traefik, Portainer, Organizr, Heimdall
  • Smart Home: HA-Dockermon, Mosquitto MQTT Broker, ZoneMinder, MiFlora Plant Sensors
  • Databases: MariaDB, phpMyAdmin, InfluxDB, Postgres, Grafana
  • Downloaders: downloader, Transmission Bittorrent with VPN, SABnzbd, qBittorrent with VPN
  • Indexers: NZBHydra2, Jackett
  • PVRs: Lidarr, Radarr, Sonarr, LazyLibrarian
  • Media Servers: Airsonic, Plex, Emby, Jellyfin, Ombi, Tautulli, PhotoShow, Calibre and more
  • Media File Management: Bazarr, Picard, Handbrake, MKVToolNix, MakeMKV, FileBot, and more
  • System Utilities: Firefox, Glances, APCUPSD, Logarr, Guacamole, Dozzle, qDirStat, StatPing, SmokePing, and more.
  • Maintenance: Ouroboros and Docker-GC

However, there seems to be quite a bit of confusion between ZFS and HW RAID, and my research has brought me confusion rather than clarity.

Questions

  • Does my storage array plan look good, or do I need to make changes?
  • Should I just go with the approach I have laid out, or flash the H330 to IT mode for ZFS pools? For ZFS I am planning a similar approach: a mirror for backups/VMs and the rest as RAID10-style storage (see the sketch after this list).
  • If I go with the ZFS approach, can I just create ZFS pools without flashing the PERC H330?
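As a sketch of what the ZFS layout could look like (assuming the H330 is flashed to IT mode; the disk names are hypothetical):

zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc   # mirrored pair for ISOs/VMs/backups
zpool create -o ashift=12 tank \
    mirror /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg \
    mirror /dev/sdh /dev/sdi                                # striped mirrors (RAID10-style) for bulk storage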

Note: I am a beginner to Proxmox / VE / RAID arrays in general. My primary plan is to use this as a media server/dev environment. Not interested in setting up Unraid / TrueNAS but

... keep reading on reddit ➡

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Dr_Rootz
πŸ“…︎ Jun 28 2021
🚨︎ report
RAM disk: can one be made and added to the array?

Hello All,

I am curious if it's possible to create a RAM disk and add it to the array, to be used for Chia farming. I've been playing around with Chia for the past 3 weeks, since I had some extra space on my unRAID server after finally converting everything to H.265 through tdarr. My cache SSD has already used up 18% of its life in that short amount of time, and since it would cost me nearly $800 to replace the pair at current market prices, I was looking into maxing out my server with a 512GB set of RAM and using a RAM drive instead of my SSDs. Luckily, I've already earned north of 3k in those 3 weeks, so this would be an upgrade paid for by the fruits of its labor...
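(On the RAM-disk half of this: as far as I know, unRAID won't take a RAM disk as an array device, but a tmpfs mount works as a plotting scratch area; the size and path here are hypothetical:)

mount -t tmpfs -o size=400G tmpfs /mnt/ramdisk   # contents disappear on reboot or power loss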

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/Storxusmc
πŸ“…︎ Jun 01 2021
🚨︎ report
Connecting a disk from array to non-unraid system

I've been googling around for some time and I cannot find an answer to a very simple, noobie question:

If I remove a disk from an unRAID array and connect it to another Linux system, will I be able to mount the drive as a "normal" XFS partition without data loss?
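(For reference: unRAID data disks are standalone XFS filesystems, so a read-only mount on any Linux box is a safe way to check; the device name is hypothetical:)

mount -t xfs -o ro /dev/sdb1 /mnt/disk   # read-only, so nothing on the disk is modified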

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/reddrid
πŸ“…︎ Jun 01 2021
🚨︎ report
I formatted drive 1 of a 4 disk array as fat32, but I can still mount it
root@proxmox:~# mount /dev/sda1 /mnt/hdd
mount: /mnt/hdd: more filesystems detected on /dev/sda1; use -t <type> or wipefs(8).
root@proxmox:~# mount /dev/sda1 /mnt/hdd -t btrfs
root@proxmox:~# btrfs filesystem show
Label: none  uuid: ee769797-3f41-4d7e-a589-1afcf721bc63
        Total devices 1 FS bytes used 30.85GiB
        devid    1 size 464.31GiB used 43.02GiB path /dev/nvme0n1p2

Label: 'public'  uuid: e63f973c-bb0e-4dc4-8d96-cf9dd6a625f6
        Total devices 4 FS bytes used 6.80TiB
        devid    1 size 2.73TiB used 2.68TiB path /dev/sda1
        devid    2 size 2.73TiB used 2.68TiB path /dev/sdb1
        devid    3 size 931.51GiB used 750.00GiB path /dev/sdf1
        devid    4 size 931.51GiB used 750.00GiB path /dev/sde1

In fstab I use the UUID, which fails because btrfs can't seem to find drive 1 anymore after my fuck up, but `mount` seems to see both the fat and btrfs filesystems. How can I get drive 1 back to being recognized by btrfs?
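(As the mount error hints, wipefs(8) can remove just the stray FAT signature while leaving the btrfs one intact; a sketch worth dry-running first:)

wipefs /dev/sda1                 # list every filesystem signature on the partition
wipefs -n -a -t vfat /dev/sda1   # dry run: show what would be erased
wipefs -a -t vfat /dev/sda1      # erase only the vfat signature, keeping btrfs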

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/josephsmendoza96
πŸ“…︎ Jul 16 2021
🚨︎ report
[Question] Unable to replace bad disk in array

Hello /r/zfs,

I've run into a problem and am hoping you can help. I have a pool of z2's. A disk went bad; usually I pull the old disk out, put the new one in, and run zpool replace. This time, however, I'm running into a bizarre scenario where the pool thinks the disk is already being replaced, but the status shows it is not. I've posted some output below to explain.

Any ideas?

[root@backupstor3 /]# zpool status
  pool: backup
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 904G in 13:27:12 with 0 errors on Wed Jun 23 04:23:03 2021
config:

        NAME                        STATE     READ WRITE CKSUM
        backup                      DEGRADED     0     0     0
          raidz2-0                  ONLINE       0     0     0
            wwn-0x5000cca244c318be  ONLINE       0     0     0
            wwn-0x50014ee2b2a72023  ONLINE       0     0     0
            wwn-0x50014ee2b2c5a91c  ONLINE       0     0     0
            wwn-0x50014ee0597bf0d6  ONLINE       0     0     0
            wwn-0x5000cca269e92629  ONLINE       0     0     0
            wwn-0x50014ee207feef03  ONLINE       0     0     0
            wwn-0x50014ee0aedb0b5b  ONLINE       0     0     0
            wwn-0x5000cca269e96c5c  ONLINE       0     0     0
            wwn-0x50014ee2b3791cc3  ONLINE       0     0     0
            wwn-0x50014eef01ab252f  ONLINE       0     0     0
            wwn-0x50014ee2b365b1dd  ONLINE       0     0     0
            wwn-0x5000cca269c440a7  ONLINE       0     0     0
          raidz2-1                  ONLINE       0     0     0
            wwn-0x50014ee208bbea7e  ONLINE       0     0     0
            wwn-0x5000c50087bda832  ONLINE       0     0     0
            wwn-0x50014ee2b31c3b4f  ONLINE       0     0     0
            wwn-0x50014ee20b6bd11b  ONLINE       0     0     0
            wwn-0x5000cca269ebe9b2  ONLINE       0     0     0
            wwn-0x50014ee208b6bb7d  ONLINE       0     0     0
            wwn-0x50014ee208749542  ONLINE       0     0     0
            wwn-0x5000cca25cd51410  ONLINE       0     0     0
            wwn-0x50014ee25d
... keep reading on reddit ➡
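(For anyone hitting the same stuck-replace state, one commonly suggested path is to expose the vdev GUIDs, detach the phantom half of the replace, and retry; a sketch with hypothetical GUIDs and device names:)

zpool status -g backup                    # list vdevs by GUID, exposing any lingering 'replacing' vdev
zpool detach backup 1234567890123456789   # detach the phantom member by GUID (hypothetical)
zpool replace backup wwn-0xOLD wwn-0xNEW  # then retry the replace (hypothetical names)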

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/WanderingITGuy
πŸ“…︎ Jun 25 2021
🚨︎ report
What to expect from 10Gbe and SHR-1 in a 5-disk array

Updated to show test with Jumbo Frames enabled

I'm trying to get my head around what to realistically expect from a pure 10GbE connection. Currently running a Synology 1621+ with the Synology 10G adapter (btrfs, SHR-1 with a 5-disk 2TB array), a UniFi Switch Flex XG, and my desktop with an Asus 10GbE adapter. Brand new Cat6/7 cabling.

Everything is set up with 1500 MTU, as I can't get Jumbo Frames/9000 MTU to work. Whenever I enable MTU 9000 on everything, the Synology box becomes unreachable over the network.

Update: Jumbo Frames is working in DSM 7 (see screenshot below). The last time I tried was on DSM 6.2.x. Still not sure whether this is on par with expected write speeds, though. The numbers below translate to approx. 500MB/s write and 1050MB/s read.

I'm currently getting around 9600Mbps read (previously 9000Mbps) and only 2100Mbps write (previously 1000Mbps) to the Synology when testing via OpenSpeedTest in a Docker container (Wi-Fi disabled on the desktop). Screenshot attached.

The write seems low to me, but I was wondering if this is to be expected with SHR-1 and 10GbE.

For what it's worth, I just installed an update to the USW Flex XG, which dropped the speed from around 2500Mbps to the stated 1000Mbps.

Any input is greatly appreciated. Thanks!
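(To separate network from disk in a setup like this, it helps to measure each on its own; a sketch assuming iperf3 and fio are available, with hypothetical hostnames and paths:)

iperf3 -s                 # on the Synology, e.g. in a Docker container
iperf3 -c nas.local -P 4  # on the desktop: raw TCP throughput over 4 streams
fio --name=seqwrite --rw=write --bs=1M --size=4g --directory=/volume1/test   # on the NAS: local sequential write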

MTU 9000 (Jumbo Frames working)

MTU 1500

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Degofedal
πŸ“…︎ Jul 04 2021
🚨︎ report
Swapping parity drive with disk already in array

I only have two drives in my array, and I want to swap which one is the parity, because the current parity disk is faster and I would rather use it for data. Both are 8TB. I'm running a parity check now, so the drives should be identical at the end of it.

Is this the procedure I'm looking for?

https://wiki.unraid.net/The_parity_swap_procedure

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rjcarne
πŸ“…︎ Jun 29 2021
🚨︎ report
